Sharing training data/fine-tuned model #780
Replies: 5 comments 6 replies
-
@camappel - did you close this on purpose? Just trying to figure out if you still want feedback or decided to wait.
-
I thought it might be better to wait until I have a .safetensors file to share, but I would greatly appreciate feedback in the meantime!
-
After training the prebuilt livestock detector model on a dataset of 3,390 labelled cows (2,269 training annotations, 785 validation annotations, and 339 test annotations, all from the same dataset) from fields in Juchowo, Poland and Wageningen, the Netherlands, the evaluation metrics improved from:
The updated model is saved as a .safetensors file here; can I open a PR on the Hugging Face repo? The aim of the project is to demonstrate the open-source process of fine-tuning existing models (in this case, to obtain more accurate detections of livestock).
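As a sanity check on the split sizes quoted above, the annotation counts work out to roughly a 67/23/10 train/val/test ratio (note they sum to 3,393, slightly more than the 3,390 quoted, so one figure may be off by a few). A minimal sketch in plain Python:

```python
# Annotation counts as quoted in the comment above
splits = {"train": 2269, "val": 785, "test": 339}
total = sum(splits.values())  # 3393

for name, n in splits.items():
    print(f"{name}: {n} ({n / total:.1%})")
# train: 2269 (66.9%)
# val: 785 (23.1%)
# test: 339 (10.0%)
```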
-
I'm working on finishing this right now, and I noticed the new load_model function; should I wait until it is released on PyPI? FYI:
-
PR with updated checkpoint opened here.
-
I've been trying to fine-tune the livestock detection model using a large labelled dataset obtained from Harvard Dataverse, and I'd like to discuss best practices around open-source training data and model training.
I have made the data available here:
An abridged version of the training process is available here. I used the following hyperparameters:
My goal is to evaluate the model's performance on a validation dataset before and after fine-tuning. Assuming the new dataset yields some increased accuracy, is there somewhere I can upload the updated weights?
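One way to make the before/after comparison concrete is to report box-level precision, recall, and F1 at a fixed IoU threshold for both checkpoints. A minimal sketch; the function is generic, and the counts in the usage example are hypothetical placeholders, not numbers from this thread:

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Box-level detection metrics at a fixed IoU matching threshold."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

# Hypothetical counts for illustration only:
baseline = precision_recall_f1(tp=600, fp=250, fn=185)
finetuned = precision_recall_f1(tp=720, fp=120, fn=65)
print("baseline  P/R/F1:", [round(x, 3) for x in baseline])
print("finetuned P/R/F1:", [round(x, 3) for x in finetuned])
```

Reporting these per split (and noting the IoU threshold used) makes the improvement claim easy for reviewers to verify against the uploaded weights.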