Hello, my experiment setup is as follows:
model: sbert
script: training_sup_text_matching_model_en.py
base_model: bert-base-nli-mean-tokens
dataset: the English STS-B dataset
I trained SBERT as a binary classifier, but the Pearson correlation is only 0.63, while your reported result under the same configuration is 0.7+. Are there any details I might have missed? Thanks!
Pay attention to the pooling method; sbert here uses sentencebert_model.py. If the results are still poor, compare directly against the SBERT reproduction in sentence-transformers.
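For cross-checking, a minimal sentence-transformers baseline might look like the sketch below. Note the issue uses a binary-classification setup, while this sketch uses the standard cosine-similarity regression objective for STS-B; the training pairs are toy placeholders and the batch size and warmup steps are assumptions, not values from this repo.

```python
# Minimal SBERT baseline with sentence-transformers for comparison.
# The two training pairs are toy placeholders; swap in the STS-B train
# split (scores rescaled to [0, 1]) for a real comparison.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/bert-base-nli-mean-tokens")

train_examples = [
    InputExample(texts=["A man is eating food.", "A man is eating a meal."], label=0.9),
    InputExample(texts=["A man is eating food.", "A girl is playing guitar."], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)  # regression on cosine similarity

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=100,  # assumption; interacts with epoch count (see below)
)
```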
The pooling is the code's default, first-last-avg, which matches the configuration you showed; the model is also the code default, sentencebert, which calls sentencebert_model. So it is odd that you reach 0.77. Could 10 epochs be insufficient training?
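For context, first-last-avg pooling is usually implemented as a masked mean over the averaged token embeddings of the first and last transformer layers. A minimal sketch, assuming a HuggingFace-style model called with output_hidden_states=True:

```python
import torch

def first_last_avg_pooling(hidden_states, attention_mask):
    """Masked mean over the average of the first and last transformer layers.

    hidden_states: tuple returned with output_hidden_states=True;
                   index 0 is the embedding layer, 1..N are transformer layers.
    attention_mask: [batch, seq_len] with 1 for real tokens, 0 for padding.
    """
    first, last = hidden_states[1], hidden_states[-1]
    avg = (first + last) / 2.0                   # [batch, seq_len, dim]
    mask = attention_mask.unsqueeze(-1).float()  # [batch, seq_len, 1]
    return (avg * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
```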
20 epochs
After switching to 20 epochs I can basically reproduce around 0.75. It seems two factors are at play: the seed (I found results vary quite a bit across seeds) and the warmup (with 20 epochs the learning rate decays at a different pace).
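A sketch of those two knobs, assuming the HuggingFace transformers linear-warmup scheduler; the step counts and warmup ratio are illustrative, not values from the script:

```python
import random
import numpy as np
import torch
from transformers import get_linear_schedule_with_warmup

def set_seed(seed: int = 42) -> None:
    """Fix all RNGs so runs with different seeds are comparable."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_seed(42)

# With warmup fixed as a fraction of total steps, doubling the epochs
# doubles total_steps, so the post-warmup linear decay becomes flatter.
steps_per_epoch = 360                  # illustrative: len(train_dataloader)
num_epochs = 20
total_steps = steps_per_epoch * num_epochs
warmup_steps = int(0.1 * total_steps)  # 10% warmup is an assumption

# Dummy parameter so the sketch is self-contained and runnable.
optimizer = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=2e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=warmup_steps, num_training_steps=total_steps
)
```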