datasets/pose/ #8081
Replies: 11 comments 49 replies
-
Besides `kpt_shape:` and `flip_idx:`, can I use `skeleton: []` in my YAML to tell my model how to connect the keypoints? Or should I write `annotator = Annotator(im=your_image_array, skeleton=custom_skeleton)` in my train.py? Or can I do nothing during training and connect the keypoints after training?
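For context, a minimal pose dataset YAML might look like the sketch below. `kpt_shape` and `flip_idx` are documented Ultralytics keys; whether a `skeleton:` key is honored at train time is exactly the open question here, so it appears only as a commented-out hypothetical (all paths and counts are placeholders):

```yaml
# Hypothetical pose dataset config (paths and keypoint count are placeholders)
path: ../datasets/my-pose
train: images/train
val: images/val
nc: 1
names: ['person']
kpt_shape: [5, 3]          # 5 keypoints, each stored as (x, y, visibility)
flip_idx: [0, 2, 1, 4, 3]  # left/right swap indices used on horizontal flip
# skeleton: [[0, 1], [1, 2]]  # hypothetical; connections are usually a plotting concern
```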
-
How do I get the skeleton values? Do they have to be based on the subject? What if I don't have a custom skeleton?
-
I want to use YOLO pose estimation on a custom dataset for person activity detection. For annotation I am using the CVAT tool, making a skeleton for each separate activity (working, using_mobile, discussing) on each person. I am now confused about how to convert the exported JSON file to YOLO format: with my current code, when I plot the contents of the txt file back onto the jpg file, the output is wrong.

Code for converting the JSON files to YOLO format (fragment):

```python
import json

# Load the JSON file
with open('person_keypoints_default.json') as f:
    data = json.load(f)

# Define a mapping from keypoints to pairs for the skeleton
skeleton_pairs = data['categories'][0]['skeleton']

# Loop over the annotations and process each
for annotation in data['annotations']:
    ...
```

Code for plotting the conversions back for verification (fragment):

```python
import cv2

def draw_keypoints(image, keypoints, pairs, colors):
    ...

def plot_image(image_path, label_path):
    ...

# Example usage
image_path = './images/20240212_165900.jpg'
```

If anyone here could guide me on this, I would be highly obliged.
-
I would like to validate some of these models on the Raspberry Pi, but I find the coco8 dataset too small and the full COCO dataset too large to store on a Pi. How can I create my own dataset and customize how many images are stored in it?
-
Hello, I want to take the coordinates of each keypoint and label them with numbers; for example, keypoint 1 has coordinates (x, y), keypoint 2 has coordinates (x1, y1), and so on. I want to do this for my entire dataset. However, I'm getting errors.

```python
model_path = 'C:/Users/idigi/Box/UIUC_ACADEMIC/PHD_THESIS/PROJECTS/GAIT_CATTLE/RESULTS/POSE_MODEL/POSE_TRAIN_MODEL/RESULTS_V1_20EPOCHS_500IMAGES/weights/best.pt'
image_path = 'C:/Users/idigi/Box/UIUC_ACADEMIC/PHD_THESIS/PROJECTS/GAIT_CATTLE/ANGLE_DATASET/DAY/06_13_19/6.13.19-124F/6.13.19-124F_Color_1560438142923.505_198.jpg'

model = YOLO(model_path)
results = model(image_path)[0]
for result in results:
    cv2.imshow('img', img)
```

And this is the error. Could you help, please? Thank you
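For what it's worth, a generic way to number the keypoints once you have them as coordinates: Ultralytics results expose keypoints (e.g. `results[0].keypoints.xy`), which can be treated as an (instances, keypoints, 2) array-like. A sketch of the numbering step, written so it works on plain nested sequences (the Ultralytics call in the comment is illustrative, not run here):

```python
def enumerate_keypoints(kpts_xy):
    """Map an (instances, keypoints, 2) array-like of pixel coordinates to
    {instance_index: {keypoint_index: (x, y)}}."""
    return {
        i: {k: (float(x), float(y)) for k, (x, y) in enumerate(instance)}
        for i, instance in enumerate(kpts_xy)
    }

# Hypothetical usage with Ultralytics (not executed here):
# results = model(image_path)
# coords = enumerate_keypoints(results[0].keypoints.xy.cpu().numpy())
```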
-
Hi there, I want to label some special points on images as keypoints. It is not for pose detection; there is usually only one, or sometimes two, equivalent points per image. What initial weights should I use for training? Thanks!
-
Hello, when I trained YOLOv8-pose, some metrics were not displayed; only the various losses were shown. But these metrics, such as mAP50 and mAP50-95, are included in the results.csv file.
-
I've used YOLOv8 successfully for instance segmentation of community cats. Now I'm trying to use it for pose estimation and getting nowhere. I have a dataset of 1130 images (output from the instance segmentation) with 20 labeled keypoints. I used the following code to train a pre-trained model:

```python
model = YOLO('yolov8m-pose.pt')  # Use a pre-trained model
results = model.train(data='/content/data/data.yaml', epochs=100, project=project,
                      name=name, patience=0, batch=4, imgsz=640)
```

My YAML file is pretty vanilla:

```yaml
train: /content/train/images
kpt_shape: [20, 3]
nc: 1
roboflow:
```

My results are dismal. Literally, there are no detections found on any of my test images, and the precision and recall for the pose are nearly zero. I feel I must be doing something really wrong to achieve these results.

I looked at one of the label files; reformatted with extra blanks to show the keypoints, it begins with `0` (one class). There are 8 visible keypoints (if 2 means visible), which corresponds to the image. But I don't know how to interpret the x,y coordinates in the second line. If they are the x-center and y-center of the bounding box, then some of the keypoints are outside the box; if they are the lower left of the bounding box, then other keypoints are outside it. I'm not sure where to begin to unravel what is happening. Any advice would be appreciated.
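As I understand the YOLO pose label format, each line is `class cx cy w h` followed by `kpt_shape[0]` keypoints, and every coordinate is normalized to the full image (0 to 1), not to the bounding box, so keypoints outside the box can be legitimate. A small checker sketched on that assumption (the layout is my reading of the format, so treat it as a sanity check, not a specification):

```python
def check_pose_label(line, kpt_shape=(20, 3)):
    """Validate one YOLO pose label line.
    Assumed layout: class cx cy w h, then kpt_shape[0] keypoints of
    kpt_shape[1] values each, all normalized to the *image* (0..1)."""
    vals = [float(v) for v in line.split()]
    n_kpt, ndim = kpt_shape
    expected = 5 + n_kpt * ndim
    if len(vals) != expected:
        return f"expected {expected} values, got {len(vals)}"
    # Collect box coords plus each keypoint's (x, y); skip visibility flags.
    coords = vals[1:5] + [v for i in range(n_kpt)
                          for v in vals[5 + i * ndim: 5 + i * ndim + 2]]
    bad = [c for c in coords if not 0.0 <= c <= 1.0]
    return f"out-of-range values: {bad}" if bad else "ok"
```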
-
Hi, I would like to know if the mAP@0.50 for keypoint detection in Ultralytics YOLOv8 is based on the Object Keypoint Similarity (OKS) used by the COCO dataset. Also, what does this 0.50 mean? It is not the IoU, right?
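For reference, COCO's Object Keypoint Similarity is defined roughly as below (my transcription of the COCO evaluation definition, where $d_i$ is the distance between predicted and ground-truth keypoint $i$, $s$ is the object scale, $k_i$ a per-keypoint constant, and $v_i$ the visibility flag); under that reading, a 0.50 threshold for pose mAP would be an OKS threshold rather than a box IoU:

```latex
\mathrm{OKS} = \frac{\sum_i \exp\!\left(-\dfrac{d_i^2}{2\,s^2 k_i^2}\right)\,\delta(v_i > 0)}{\sum_i \delta(v_i > 0)}
```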
-
My training-set label file is:

```
0 0.47899 0.45495 0.20011 0.29458 0.38743 0.31863 2 0.54499 0.31326 2 0.57668 0.51331 2 0.38480 0.53105 2 0.38988 0.38655 2 0.54744 0.37768 2 0.56915 0.57890 2 0.39076 0.59874 2
```

and I set `kpt_shape: [8, 3]` in the yaml file to detect 8 keypoints. Why does it prompt `train: WARNING` and

```
File "C:\applications\miniconda3\envs\yolov8\Lib\site-packages\ultralytics\data\dataset.py", line 161, in get_labels
```

Thank you.
-
Hello ultralytics team,
-
datasets/pose/
Understand the YOLO pose dataset format and learn to use Ultralytics datasets to train your pose estimation models effectively.
https://docs.ultralytics.com/datasets/pose/