About the DeepRT+ calibration #13
Hi! For the calibration from a pretrained model (run1) to another LC run (run2), please:
Thank you for explaining the usage of the calibration function. Could you tell me what algorithm and methods are used in the RT calibration, e.g., piecewise linear regression?
The method is transfer learning. The model is first pre-trained on data1 from LC run1, then fine-tuned on data2 from LC run2. The only difference between using and not using this pretraining & fine-tuning scheme is the initialization: without it, we train a deep neural network (DNN) from scratch with randomly initialized weights; with it, pretraining provides a well-educated guess for the DNN's weights, so stochastic gradient descent converges faster and to a better minimum. This pretraining & fine-tuning routine is very widely adopted in Computer Vision, Natural Language Processing, etc. In LC-MS, our work is perhaps the first to use it to integrate/calibrate multiple runs across different LCs, experiments, and labs.
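The pretrain-then-fine-tune idea above can be sketched numerically. This is a minimal illustration only, not the DeepRT+ code: it replaces the deep network with a one-parameter linear model and uses synthetic "retention times" where run2 is a slightly shifted version of run1, mimicking two LC runs of the same peptides. All function and variable names here are made up for the sketch.

```python
def train(xs, ys, w, b, lr=0.1, epochs=200):
    """Plain full-batch gradient descent on mean squared error."""
    n = len(xs)
    for _ in range(epochs):
        gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

def mse(xs, ys, w, b):
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Synthetic data: run2's RTs are a small shift/rescale of run1's.
xs = [i / 20 for i in range(21)]
run1 = [2.0 * x + 1.0 for x in xs]   # "big" pretraining run
run2 = [2.2 * x + 1.3 for x in xs]   # new LC run to calibrate to

# 1) Pretrain on run1 from a random (here: zero) init, many epochs.
w1, b1 = train(xs, run1, w=0.0, b=0.0, epochs=2000)

# 2) Fine-tune on run2 for only a few epochs, starting from the
#    pretrained weights versus training from scratch.
w_ft, b_ft = train(xs, run2, w1, b1, epochs=20)
w_sc, b_sc = train(xs, run2, 0.0, 0.0, epochs=20)

# With the same small budget, the pretrained init lands much closer
# to the run2 optimum than the from-scratch init.
print(mse(xs, run2, w_ft, b_ft) < mse(xs, run2, w_sc, b_sc))  # True
```

In DeepRT+ the same logic applies, except the model is a deep network and "fine-tuning" means continuing SGD on the held-out calibration peptides from the new run, starting from the pretrained weights instead of a random initialization.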
Fully understood. Thanks a lot!
Hi, could you please tell me how DeepRT+ performs the calibration using a certain ratio of the test-group peptides after pretraining on the big dataset?
Thanks.