Training and Evaluating On Our Own Dataset #3
Comments
We are working on this and will probably update the repo after the ECCV deadline. |
Hi, thanks for your work. When will you update the repo for training on our own dataset? |
We are still occupied with the ECCV supplementary materials. The plan is to update the repo (for customized training) in the upcoming weeks. |
Can you share the feature extraction code for these public datasets? |
Hi, for the THUMOS14 and ActivityNet datasets, we use the features provided by CMCS. CMCS uses pytorch-i3d-feature-extraction to extract the features. For TSP features, you may want to refer to TSP-official for details. |
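(As a rough illustration only: a minimal sketch of loading and sanity-checking pre-extracted I3D features. The paths and file names below are hypothetical; the actual CMCS release may organize its files differently.)

```python
# Minimal sketch: load pre-extracted I3D features and check their shapes.
# The paths and file names here are hypothetical, for illustration only.
import numpy as np

rgb_feat = np.load("features/thumos14/video_test_0000004_rgb.npy")    # hypothetical path
flow_feat = np.load("features/thumos14/video_test_0000004_flow.npy")  # hypothetical path

# Pre-extracted I3D features are typically (num_snippets, feature_dim),
# with feature_dim = 1024 per modality.
print(rgb_feat.shape, flow_feat.shape)
```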
Thanks for your reply! |
Sorry to bother you... I downloaded the features provided by CMCS, but I find that the features extracted from THUMOS14 are different from yours. What do I need to change to extract the same features as yours? Also, could you share the I3D features extracted from the ActivityNet dataset? Thank you very much! Looking forward to your reply~ |
Hi, I directly use the features from CMCS. If you downloaded the full CMCS release, they provide multiple versions; I believe I chose the I3D version with ten-crop (I cannot remember the details). The I3D features I used are also from the CMCS repo. |
Thank you! My problem has been solved! |
Hi, I am trying to train your model on my own dataset and also applied pytorch-i3d-feature-extraction to extract both RGB and flow features. I compared the features provided by CMCS with my I3D results and found they have different shapes. Do you have any idea how the RGB and flow features are fused? According to the I3D paper, the final prediction for the action detection task is obtained by simply averaging the results of the RGB I3D network and the flow I3D network. For feature extraction, are the RGB and flow features also processed in a similar way? |
For the action recognition task, you can directly use the average of the RGB and Flow probabilities as the final action prediction. For the action detection task, the common practice is to concatenate the RGB feature and the Flow feature into a final feature. For example, if the extracted features are 1024-d for both the RGB and Flow modalities, the final feature dimension will be 2048. |
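(A minimal sketch of the two fusion schemes described above; the shapes and arrays are illustrative placeholders, not data or code from the repo.)

```python
# Fusion sketch: concatenation for detection features, averaging for recognition scores.
import numpy as np

num_snippets, feat_dim, num_classes = 128, 1024, 20

rgb_feat = np.random.randn(num_snippets, feat_dim)   # placeholder RGB I3D features
flow_feat = np.random.randn(num_snippets, feat_dim)  # placeholder Flow I3D features

# Action detection: concatenate along the feature axis -> (num_snippets, 2048)
fused_feat = np.concatenate([rgb_feat, flow_feat], axis=1)
print(fused_feat.shape)

# Action recognition: average the per-class probabilities of the two streams
rgb_prob = np.random.dirichlet(np.ones(num_classes))   # placeholder RGB probabilities
flow_prob = np.random.dirichlet(np.ones(num_classes))  # placeholder Flow probabilities
final_prob = (rgb_prob + flow_prob) / 2.0
print(final_prob.argmax())
```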
Thank you! |
Hi! I am looking forward to training and evaluating on our own dataset. It's now July 26th. When will you update the repo for training on our own dataset? Thank you. |
Hi, thank you for your interest in our project! Hopefully we will release that tutorial soon (no later than the ECCV conference). |
Hello, thank you for your interest in our project! You can send questions directly by email, as I can reply to emails more quickly. My email is [email protected]. |
For external scores, it means that we obtain the classification score for each video from an external method, i.e., a classification model. |
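(A minimal sketch of what such external video-level classification scores might look like on disk; the file name and JSON layout here are assumptions for illustration, not the repo's actual format.)

```python
# Hypothetical format: one probability vector per video id, stored as JSON.
import json
import numpy as np

with open("external_cls_scores.json") as f:   # hypothetical file name
    raw = json.load(f)                         # {video_id: [p_class_0, p_class_1, ...]}

external_scores = {vid: np.asarray(p, dtype=np.float32) for vid, p in raw.items()}

# Top-scoring class for one (hypothetical) video id
vid = "video_test_0000004"
print(external_scores[vid].argmax())
```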
I got it. Thanks! |
+1 Looking forward to training and evaluating on our own dataset. |
Is there an update on training and evaluating on our own dataset? |
Hi, thanks for sharing the code @happyharrycn. Could you let me know if you are still planning to share a recipe for training and evaluating on our own dataset? |
Hello, I want to know how you compute the GFLOPs. Can you share the code? |
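(As a generic sketch only: one common way to estimate FLOPs for a PyTorch model is fvcore's FlopCountAnalysis; this is not necessarily how the authors measured their numbers, and the model and input below are placeholders.)

```python
# Generic FLOP counting recipe with fvcore (placeholder model and input).
import torch
from fvcore.nn import FlopCountAnalysis

model = torch.nn.Linear(2048, 512)   # placeholder model for illustration
dummy_input = torch.randn(1, 2048)   # shape must match the model's expected input

flops = FlopCountAnalysis(model, dummy_input)
# Note: fvcore counts one fused multiply-add as one FLOP.
print(flops.total() / 1e9, "GFLOPs")
```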
How can I get a JSON file of a comment?
Hi! Thanks for your great work. Looking forward to your guide for training and evaluating on our own dataset.