Segmentation fault while running in ROS with Python #3
Since nothing is printed before the segmentation fault, the crash seems to happen before the model is even loaded. You can try using the faulthandler module to find out where the segmentation fault occurs: add import faulthandler and faulthandler.enable() before the first import in the yolact_ros script (directly after #!/usr/bin/env python). Aside from that, check which CUDA version you have installed (nvcc --version) and verify that you installed the PyTorch build that matches your CUDA installation.
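For reference, a minimal sketch of what the top of the node script would look like with faulthandler enabled (the imports listed after the comment are placeholders for whatever the script actually imports):

```python
#!/usr/bin/env python
# Enable faulthandler as early as possible so that a segmentation fault
# in any later import (e.g. torch loading its CUDA libraries) prints a
# Python traceback instead of dying silently.
import faulthandler
faulthandler.enable()

# ... the original imports of the yolact_ros node follow here, e.g.:
import rospy
import torch
```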
Hey! Can you let me know the steps you used to set up this wrapper, from integration to testing? Sorry for asking such a noob question, I am new to ROS and don't know much about it. Steps I followed:
@fgonzalezr1998 Here is the error I am facing:
rosrun yolact_ros yolact_ros is the correct way to run it. The error seems to happen during the initialization of a layer from Yolact. Which model are you using? I would recommend starting with "yolact_base_54_800000.pth" (you can download the models from here). You can change the default model path in line 367 of yolact_ros or pass the path as a ROS parameter. The yolact_plus model currently used as default requires you to install DCNv2 (see here), so it's easier to use the base model. If you want to use Yolact++: the version of DCNv2 in the yolact repository doesn't work with the newest PyTorch version, so I installed it from this fork. As a further suggestion, it seems like you are using CUDA 10.0. The newest PyTorch release doesn't have a CUDA 10.0 build, so I would recommend installing torch 1.4 if you don't want to upgrade your CUDA installation. Use the following command:
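(The exact command was not preserved in this thread; for a CUDA 10.0 setup it would presumably be the torch 1.4.0 cu100 wheel from the PyTorch stable index, along these lines:)

```
pip install torch==1.4.0+cu100 torchvision==0.5.0+cu100 -f https://download.pytorch.org/whl/torch_stable.html
```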
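For passing the model path on the command line instead of editing the script, a rosrun invocation along these lines should work; note that the parameter name model_path is an assumption here, so check the node's README or source for the actual name:

```
rosrun yolact_ros yolact_ros _model_path:=/path/to/yolact_base_54_800000.pth
```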
I've updated the readme with usage instructions now and changed the default model to yolact_base. |
I am using the same model as you recommended. I still get the same error, and I am using PyTorch 1.1.0.
That's a pretty old version. I tested downgrading to it and also got errors. Try upgrading to torch 1.4.0 with the command above and check if it works then. |