SAM segmentation masks to YOLO format #6421
Replies: 7 comments 22 replies
-
@akashsateesha hello! It's great to hear that the SAM model has provided you with impressive segmentation masks from your bounding box annotations. To use these masks for instance segmentation with YOLOv8, you'll need to convert them into a format that YOLOv8 can understand for training. YOLOv8 supports training on segmentation tasks, and the format described here encodes each instance mask as a color-coded PNG image, with each color representing a different class. The conversion from binary masks therefore involves assigning a unique color value to each instance and class in your dataset.

Here's a simplified example of how you might perform this conversion in Python:

```python
import cv2
import numpy as np

# Load the binary mask as a single-channel grayscale image
binary_mask = cv2.imread('path_to_binary_mask.png', 0)

# Find the contour of each instance
contours, _ = cv2.findContours(binary_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# Create an empty image for the color-coded mask
color_mask = np.zeros((binary_mask.shape[0], binary_mask.shape[1], 3), dtype=np.uint8)

# Assign a unique color to each instance
for i, contour in enumerate(contours):
    color = [int(c) for c in np.random.choice(range(256), size=3)]  # Random color per instance
    cv2.drawContours(color_mask, [contour], -1, color, -1)  # Fill the contour with the color

# Save the color-coded mask
cv2.imwrite('path_to_color_mask.png', color_mask)
```

Please note that the above code is a basic example and may need to be adapted to your specific use case, especially regarding the assignment of class-specific colors and the handling of multiple instances. Once you have converted your masks, you can proceed to train your YOLOv8 model on the instance segmentation task using your dataset. For more detailed information on training YOLOv8 models for segmentation, please refer to the Segmentation Task Documentation. If you encounter any issues or have further questions, feel free to reach out. Good luck with your instance segmentation model training! 🚀
-
I have the exact same issue as @nicholas-aplin. Here is my directory structure:

Here is my

I have the masks in

Here is my training code:

@pderrenger please suggest.
-
Hi Dear Paula @pderrenger, would you clarify the snippet of code you provided earlier, which is supposed to convert binary masks to the YOLOv8 PNG mask format?

The confusion is this: your code creates a) a 3-channel PNG mask, where b) colors are assigned randomly and uniquely to each instance in the image. However, later in the thread you say that the PNG mask should be a) grayscale, and b) that the values should correspond to class IDs, not be random. Which way is correct then? Say we have

Shouldn't we create a single-channel PNG mask and assign values as follows:

Also you mention that there should be

Kindly advise.
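If the grayscale, class-ID-coded convention is the intended one, a minimal sketch could look like the following. Note this is an illustration under assumptions, not an official Ultralytics converter: the instance masks and class IDs here are synthetic stand-ins for what you would get from SAM or from `cv2.findContours`.

```python
import numpy as np

# Hypothetical example: two instances with assumed class IDs 1 and 2.
# In practice these boolean masks would come from SAM output.
height, width = 8, 8
class_mask = np.zeros((height, width), dtype=np.uint8)  # single channel, 0 = background

instance_a = np.zeros((height, width), dtype=bool)
instance_a[1:4, 1:4] = True  # a small square "object" of class 1
instance_b = np.zeros((height, width), dtype=bool)
instance_b[5:7, 5:7] = True  # another "object" of class 2

# Paint each instance with its class ID (not a random color)
for mask, class_id in [(instance_a, 1), (instance_b, 2)]:
    class_mask[mask] = class_id

print(np.unique(class_mask))  # → [0 1 2]
```

The key difference from the earlier snippet is that the pixel value itself carries the class label, so the mask stays single-channel and deterministic.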
-
When I read through the Ultralytics dataset format documentation, it said the labels have to be text files, with each line denoting the class ID followed by the normalized boundary coordinates. I can't find anything saying that colorized segmentation masks are an option. Am I missing something in the documentation?
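For reference, the documented YOLO segmentation label format is indeed one `.txt` file per image, where each line is a class index followed by the normalized vertices of one polygon (the coordinate values below are purely illustrative):

```text
0 0.681 0.485 0.670 0.487 0.676 0.490 0.598 0.586
1 0.504 0.100 0.501 0.104 0.498 0.204 0.493 0.110
```

Each x is divided by the image width and each y by the image height, so all values fall in [0, 1].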
-
Hi. I am fine-tuning YOLOv8x-seg to segment only one type of object in my images, so there is only one class I am interested in. When I test the model after fine-tuning, it segments the objects in most cases, but the results are not good: sometimes extra area (other than the object) is included, and sometimes the entire object is not segmented.

My code basically trains/fine-tunes like this:

```shell
yolo task=segment mode=train model=yolov8x-seg.pt data='C:\Users\user1\YOLO\ultralytics\ultralytics\config_new.yaml' epochs=5 imgsz=640
```

My data is structured as follows:

dataset/

If I put text files in 'labels' (train and val) carrying the normalized coordinates of the rectangular object I want to segment, the training runs successfully, although I believe this is the format for the detection task. When I test after training on data like this, the segmentation quality is not good, but at least the objects in my test images are segmented.

```yaml
path: C:\Users\user1\YOLO\ultralytics\ultralytics\datasets
nc: 1
```

I would highly appreciate any feedback on this.
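For a single-class segmentation dataset, a minimal config along these lines is typical. This is a sketch under assumptions: the `train`/`val` subpaths and the class name are placeholders to adapt, while the `path` value is taken from the post above.

```yaml
# Hypothetical single-class segmentation dataset config
path: C:\Users\user1\YOLO\ultralytics\ultralytics\datasets  # dataset root
train: images/train   # assumed layout; labels are found by replacing images/ with labels/
val: images/val
nc: 1
names: ['object']     # replace with your actual class name
```

Note that for the segment task, the label `.txt` files should contain polygon coordinates (at least three x,y pairs per line), not the 4-value box format used for detection.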
-
Hello @glenn-jocher @pderrenger,

My data.yaml is as follows:

And my training code is as follows:

However, I got this error when training:

I am not sure where I made a mistake or what I am missing. Does anyone have an idea?
-
Did you automate your forum replies with GPT?
-
Hey everyone!

I have a dataset with only bounding box annotations. I wanted to generate segmentation masks from the available bounding boxes, so I made use of the SAM model. SAM produces segmentation masks in binary format, and when I plotted the masks the results were pretty impressive.

Now I want to build an instance segmentation model on this dataset using YOLOv8 or YOLOv5, but for that I need to convert my binary segmentation masks to YOLO format. I have tried OpenCV's findContours too, but it was a failure. Can someone please guide me here?

Thank you!