Frozen Synthetic Dataset #31

Open
hnaik opened this issue Feb 19, 2021 · 0 comments
hnaik commented Feb 19, 2021

From the code in the gengraph and synthetic_structsim modules, I gather that when training on the synthetic datasets, the input graphs are generated on the fly. Given the randomness routines in methods such as synthetic_structsim.build_graph(), every invocation of the method will likely generate a completely different instance of the input graph.
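
For example (a hedged sketch only; the build_graph() arguments and return values shown here are my assumptions, not taken from the repository), two back-to-back calls with identical parameters would generally not yield the same edge set:

```python
from synthetic_structsim import build_graph  # import path assumed

# Call the generator twice with the same nominal parameters; because of the
# internal randomness, the two resulting graphs typically differ.
g1, roles1, _ = build_graph(300, "ba", [["house"]] * 80)
g2, roles2, _ = build_graph(300, "ba", [["house"]] * 80)
print(set(g1.edges()) == set(g2.edges()))  # usually False
```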

  1. Wouldn't it be more prudent to run a few tens of these iterations, save the resulting representations to files, and then use those files for all experiments? Yes, some surprise and variety would be lost, but the experiments would then be repeatable/reproducible, wouldn't you agree? (A minimal sketch of what I mean follows this list.)
  2. Are there other downsides to this frozen-dataset approach that I might have missed?
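
Here is a minimal sketch of the frozen-dataset idea, assuming build_graph() returns a networkx graph plus per-node role labels (the real signature and return values may differ); the file names and helper functions are hypothetical:

```python
import pickle
from pathlib import Path

from synthetic_structsim import build_graph  # import path assumed


def freeze_dataset(out_dir, n_instances=20, width_basis=300,
                   basis_type="ba", list_shapes=None):
    """Generate a fixed pool of synthetic graphs once and pickle them."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    list_shapes = list_shapes or [["house"]] * 80
    for i in range(n_instances):
        graph, roles, _ = build_graph(width_basis, basis_type, list_shapes)
        with open(out_dir / f"syn_graph_{i:03d}.pkl", "wb") as fh:
            pickle.dump({"graph": graph, "roles": roles}, fh)


def load_frozen(path):
    """Load a previously frozen instance so every run sees the same graph."""
    with open(path, "rb") as fh:
        payload = pickle.load(fh)
    return payload["graph"], payload["roles"]
```

Experiments would then iterate over the saved files instead of calling build_graph() at training time, so every run sees exactly the same input graphs.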