As mentioned in your article, you had 120 GB of data before compression. However, judging from your data-preparation code, only 1024 shards are actually used, which amount to roughly 10 GB of TFRecords randomly sampled from the original data (at least in my case). Could you clarify this, perhaps by giving more detail on the data size used to train the language model?
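For reference, this is how I estimated the ~10 GB figure: summing the on-disk sizes of the generated shard files. A minimal sketch (the shard naming pattern here is an assumption; adjust the glob to match your repo's actual output names):

```python
import os
from glob import glob

def total_shard_size_gb(pattern):
    """Sum the on-disk sizes (in GB) of all TFRecord shards matching the glob pattern."""
    return sum(os.path.getsize(p) for p in glob(pattern)) / 1e9

# Hypothetical shard naming, e.g. data.tfrecord-00000-of-01024:
# print(f"{total_shard_size_gb('data.tfrecord-*-of-01024'):.1f} GB")
```

Running this over the 1024 generated shards is what gives me the ~10 GB total, versus the 120 GB cited in the article.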