Based on the code in the gengraph and synthetic_structsim modules, I gather that when training with the synthetic datasets, input graphs are generated on the fly. Given the randomness routines in methods such as synthetic_structsim.build_graph(), it is evident that every invocation will likely produce a completely different instance of the input graph.
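For what it's worth, one alternative to freezing files would be to seed the RNGs before each generation call. Below is a minimal sketch, assuming the generators draw from Python's `random` module and NumPy's global RNG (I haven't verified which RNGs the repo actually uses); `seeded_build` and the `SEED` value are hypothetical, not part of the codebase:

```python
# Minimal sketch: seed the global RNGs so that repeated invocations of a
# graph-building routine produce the same instance. ASSUMPTION: the
# generators draw only from Python's `random` and NumPy's global RNG.
import random

import numpy as np

SEED = 42  # hypothetical fixed seed

def seeded_build(build_fn, *args, **kwargs):
    """Call a graph-building function with all global RNGs reset first."""
    random.seed(SEED)
    np.random.seed(SEED)
    return build_fn(*args, **kwargs)

# Hypothetical usage; the exact build_graph() arguments depend on the repo:
# graph, roles = seeded_build(synthetic_structsim.build_graph, ...)
```

This keeps the on-the-fly generation but makes each run deterministic, at the cost of losing variety across runs entirely, which is why I am asking about the file-based approach instead.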
Wouldn't it be more prudent to run a few tens of these iterations, save the representations to files, and then use those files for all experiments? Yes, there would be some loss in surprise and variety, but at least the experiments would then be repeatable/reproducible, wouldn't you agree?
Are there other downsides to this frozen-dataset approach that I might have missed?
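For concreteness, here is a rough sketch of what I have in mind. `make_graph`, `freeze_dataset`, `load_frozen`, and the file layout are all hypothetical placeholders, not anything from this codebase; `make_graph` would stand in for whatever generator is used (e.g. a call into gengraph):

```python
# Minimal sketch of the frozen-dataset idea: generate a few tens of graph
# instances once, pickle each to disk, and have every experiment load from
# those files instead of regenerating. All names here are hypothetical.
import os
import pickle

def freeze_dataset(make_graph, n_instances, out_dir="frozen_graphs"):
    """Generate n_instances graphs and save each to its own pickle file."""
    os.makedirs(out_dir, exist_ok=True)
    for i in range(n_instances):
        with open(os.path.join(out_dir, f"graph_{i:03d}.pkl"), "wb") as f:
            pickle.dump(make_graph(), f)

def load_frozen(out_dir="frozen_graphs"):
    """Load every frozen graph instance back, in filename order."""
    graphs = []
    for name in sorted(os.listdir(out_dir)):
        if name.endswith(".pkl"):
            with open(os.path.join(out_dir, name), "rb") as f:
                graphs.append(pickle.load(f))
    return graphs
```

Since networkx graphs pickle cleanly, this should work for the synthetic inputs as-is, and the `graph_000.pkl`-style naming keeps the iteration order stable across machines.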