SAHI-like tool for instance segmentation and detection with support for YOLOv8, YOLOv9, FastSAM, and RTDETR #9381
-
Example of information in docs.ultralytics:

This library simplifies SAHI-like inference for instance segmentation tasks, enabling the detection of small objects in images. It caters to both object detection and instance segmentation tasks, supporting a wide range of Ultralytics models.

Model Support: The library supports various Ultralytics deep learning models, including YOLOv8, YOLOv9, FastSAM, and RTDETR. Users can choose from pre-trained options or use custom-trained models to best suit their task requirements. The library also offers sleek customization of the visualization of inference results for all models, both in the standard approach (direct network run) and the unique patch-based variant.

Installation: You can install the library via pip: `pip install patched_yolo_infer`. Click here to visit the PyPI page. Note: if CUDA support is available, it's recommended to install PyTorch with CUDA support before installing the library; otherwise, the CPU version will be installed by default.

Notebooks: Interactive notebooks are provided to showcase the functionality of the library. They cover batch-inference procedures for detection, instance segmentation, custom visualization of inference results, and more. Each notebook is paired with a YouTube tutorial, making it easy to learn and implement features.
Examples:
- Detection example
- Instance Segmentation example 1
- Instance Segmentation example 2

Usage

1. Patch-Based Inference

To carry out patch-based inference of YOLO models using our library, you need to follow a sequential procedure. First, create an instance of the MakeCropsDetectThem class, providing all desired parameters related to YOLO inference and the patch segmentation principle. Then pass it to the CombineDetections class, which merges the per-patch predictions. The output includes several attributes that can be leveraged for further analysis or visualization:
```python
import cv2
from patched_yolo_infer import MakeCropsDetectThem, CombineDetections

# Load the image
img_path = 'test_image.jpg'
img = cv2.imread(img_path)

element_crops = MakeCropsDetectThem(
    image=img,
    model_path="yolov8m.pt",
    segment=False,
    shape_x=640,
    shape_y=640,
    overlap_x=50,
    overlap_y=50,
    conf=0.5,
    iou=0.7,
    resize_initial_size=True,
)
result = CombineDetections(element_crops, nms_threshold=0.05, match_metric='IOS')

# Final results:
img = result.image
confidences = result.filtered_confidences
boxes = result.filtered_boxes
masks = result.filtered_masks
classes_ids = result.filtered_classes_id
classes_names = result.filtered_classes_names
```

Explanation of possible input arguments of MakeCropsDetectThem and CombineDetections:
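To make the `shape_x`/`shape_y` and `overlap_x`/`overlap_y` parameters above more concrete, here is a minimal sketch of how a grid of overlapping crop windows can be computed from them. This is only an illustration of the patch-grid idea under the assumption that the overlaps are percentages of the patch size; the function name `crop_windows` is hypothetical and this is not the library's exact internal logic.

```python
def crop_windows(width, height, shape_x, shape_y, overlap_x, overlap_y):
    """Return (x1, y1, x2, y2) crop windows covering a width x height image.

    overlap_x / overlap_y are assumed to be percentages of the patch size.
    """
    step_x = max(1, int(shape_x * (1 - overlap_x / 100)))
    step_y = max(1, int(shape_y * (1 - overlap_y / 100)))
    xs = list(range(0, max(width - shape_x, 0) + 1, step_x))
    ys = list(range(0, max(height - shape_y, 0) + 1, step_y))
    # Make sure the right and bottom edges of the image are still covered
    if xs[-1] + shape_x < width:
        xs.append(width - shape_x)
    if ys[-1] + shape_y < height:
        ys.append(height - shape_y)
    return [(x, y, x + shape_x, y + shape_y) for y in ys for x in xs]

# A 1280x640 image with 640x640 patches and 50% overlap yields three crops
windows = crop_windows(1280, 640, 640, 640, 50, 50)
print(windows)
```

Each window would then be run through the detector independently, after which the per-patch results are merged (the role CombineDetections plays above).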
2. Custom Inference Visualization

The visualize_results function visualizes custom results of object detection or segmentation on an image. Args:
Example of usage:

```python
from patched_yolo_infer import visualize_results

# Assuming result is an instance of the CombineDetections class
result = CombineDetections(...)

# Visualize the results using the visualize_results function
visualize_results(
    img=result.image,
    confidences=result.filtered_confidences,
    boxes=result.filtered_boxes,
    masks=result.filtered_masks,
    classes_ids=result.filtered_classes_id,
    classes_names=result.filtered_classes_names,
    segment=False,
)
```
-
Great, thank you. I will wait for a response from your team regarding what needs to be done to integrate the tutorial for our library into the Ultralytics documentation. I have posted a preliminary version of the text in the "Discussions" section, so I assume it remains to wait for feedback.
-
Good day @pderrenger. I think I've figured it out and managed to create a markdown file describing the library, following the pattern of other similar contributions. I've just opened a pull request, #9387; please take a look if you can. Thank you in advance for your huge help.
-
Hi @Koldim2001,
-
@Koldim2001 How do I export the annotations from the YOLOv8 instance segmentation results to various formats? I would really appreciate it.
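One common target format for such an export is COCO-style JSON. Below is a minimal sketch of converting box and class outputs (such as the `filtered_boxes` / `filtered_classes_id` attributes shown earlier) into COCO annotation dicts. The `[x1, y1, x2, y2]` box convention and the helper name `boxes_to_coco` are assumptions for illustration, not part of the library's API.

```python
import json

def boxes_to_coco(boxes, class_ids, image_id=0):
    """Convert [x1, y1, x2, y2] boxes to COCO-style [x, y, w, h] annotations."""
    annotations = []
    for ann_id, (box, cls) in enumerate(zip(boxes, class_ids), start=1):
        x1, y1, x2, y2 = box
        annotations.append({
            "id": ann_id,
            "image_id": image_id,
            "category_id": int(cls),
            "bbox": [x1, y1, x2 - x1, y2 - y1],  # COCO stores x, y, width, height
            "area": (x2 - x1) * (y2 - y1),
            "iscrowd": 0,
        })
    return annotations

anns = boxes_to_coco([[10, 20, 110, 220]], [3])
print(json.dumps(anns, indent=2))
```

Segmentation masks would additionally need to be encoded (e.g. as polygons or RLE) before being attached to each annotation; that step is format-specific and omitted here.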
-
@Koldim2001 I like this initiative a lot. I have added my fine-tuned YOLOv9-seg model and OpenCV video capture to take video as input instead of images, along with the code you have given us. What is the best way to make your code handle video instead of images?
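Since the library's pipeline operates on single images, one straightforward approach is to run the full patch-based pass on every decoded frame. The sketch below shows that frame-loop structure; `infer_frame` is a hypothetical stand-in for a wrapper around the MakeCropsDetectThem + CombineDetections calls shown earlier, and frames would normally come from `cv2.VideoCapture` rather than a plain list.

```python
def process_video(frames, infer_frame):
    """Run a single-image inference callable over each video frame."""
    results = []
    for frame in frames:
        # One full patch-based inference pass per frame
        results.append(infer_frame(frame))
    return results

# With OpenCV the frame source would look roughly like (not executed here):
#   cap = cv2.VideoCapture("input.mp4")
#   while cap.isOpened():
#       ok, frame = cap.read()
#       if not ok:
#           break
#       result = infer_frame(frame)

# Toy demonstration with dummy "frames" and a stub inference callable
demo = process_video(["f0", "f1"], lambda f: f.upper())
print(demo)
```

Note that patch-based inference is comparatively expensive, so for real-time video you may want to process every N-th frame or reduce the patch count.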
-
Hi there!
I'd like to share a project I've recently worked on. Together with a colleague, I've created a repository that serves as a tool for SAHI-like inference, specifically tailored to instance segmentation tasks.
Our repository allows for segmenting small objects in images by combining mask predictions from various overlapping patches. We support both YOLOv8-seg and FastSAM. Additionally, we have a variant for object detection tasks, and the key distinction from SAHI is support for all the current models from the Ultralytics team: YOLOv8, YOLOv9, RTDETR, and others.
I'm a huge fan of Ultralytics, so I'd be thrilled to assist you if you're interested in our project. I'm confident that the ability to find a large number of segments would benefit many people, especially when using standard Ultralytics models.
Here's the link to the project: YOLO-Patch-Based-Inference.
I have already been in touch with Paula Derrenger. She informed me that I might be able to contribute documentation for this library to docs.ultralytics.com. Here is the link to the project discussion: https://github.com/orgs/ultralytics/discussions/8734#discussioncomment-8933879
So I would be happy to participate in this if you have no objections. I have always dreamed of helping the Ultralytics team! I am confident that my project enables convenient detection and instance segmentation of small objects in an image, as well as custom visualization of the inference results of all the main networks available in the Ultralytics library.
In this regard, my library could become a very useful addition to yours.