I am conducting research on deepfake text and am currently trying to replicate the results for the other models used as discriminators in the paper. The paper mentions a fine-tuned version of BERT in which you extended the maximum sequence length to 1024 by initializing new position encodings.
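For context, here is a minimal sketch of how I am currently attempting the extension. It assumes the Hugging Face `transformers` library and interprets "initializing new encodings" as copying the pretrained position embeddings into a larger table and leaving the new rows randomly initialized; that strategy is my assumption, not necessarily what you did:

```python
import torch
from transformers import BertForSequenceClassification

NEW_MAX_LEN = 1024  # assumed target length from the paper

model = BertForSequenceClassification.from_pretrained("bert-base-cased")
embeddings = model.bert.embeddings
old_emb = embeddings.position_embeddings           # pretrained 512 x hidden table
old_len, hidden = old_emb.weight.shape

# Build a larger position-embedding table and copy the pretrained rows in.
new_emb = torch.nn.Embedding(NEW_MAX_LEN, hidden)
with torch.no_grad():
    new_emb.weight[:old_len] = old_emb.weight      # keep pretrained encodings
    # rows 512..1023 keep their random init and are learned during fine-tuning

embeddings.position_embeddings = new_emb
model.config.max_position_embeddings = NEW_MAX_LEN

# Refresh the buffered indices so long inputs index the enlarged table.
embeddings.register_buffer("position_ids", torch.arange(NEW_MAX_LEN).unsqueeze(0))
if hasattr(embeddings, "token_type_ids"):
    # Newer transformers versions also buffer token_type_ids of the same length.
    embeddings.register_buffer(
        "token_type_ids", torch.zeros(1, NEW_MAX_LEN, dtype=torch.long)
    )
```

Is this roughly what was done, or was a different initialization (e.g., tiling the pretrained encodings) used?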
Could you please upload those models? Also, could you tell us what top-p threshold was used to generate the text fed to BERT for discrimination?
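For reference, this is how I am generating samples with nucleus (top-p) sampling; the `top_p` value below is a placeholder, which is exactly why I am asking for the threshold you used:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Deepfake text detection is"
inputs = tokenizer(prompt, return_tensors="pt")

# do_sample=True with top_p samples from the smallest set of tokens whose
# cumulative probability exceeds the threshold (nucleus sampling).
output = model.generate(
    **inputs,
    do_sample=True,
    top_p=0.96,  # placeholder threshold, not taken from the paper
    max_length=100,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```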
Thanks!