Results table for comparison with published results #17
I would add a
As far as I saw, we already have a method which collects a dataframe containing the combination of best/avg/worst results for each model. Thus, we would need to create a multi-index:

```python
# published_results is a dictionary of the loaded results for one dataset,
# keyed by (reference_key, model_name) tuples
# example
assert published_results[('trouillon2016', 'complex')] == {
    'rank_type': 'best',
    'mean_reciprocal_rank': 0.692,
    'hits_at_1': 0.599,
    'hits_at_3': 0.759,
    'hits_at_10': 0.840,
}

# df is the dataframe from the existing collection method; its rows are
# indexed by a (model, reference) MultiIndex
for (reference_key, model_name), metrics in published_results.items():
    # abbreviate the rank type to its first letter ('b'/'a'/'w') for the superscript
    rank_type_key = metrics.pop('rank_type')[0]
    reference = r'\cite{' + str(reference_key) + '}^{' + rank_type_key + '}'
    for metric_name, value in metrics.items():
        df.loc[(model_name, reference), metric_name] = value
```
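A self-contained sketch of that idea (my variant, not necessarily the repo's actual code): collect the rows in a plain dict first, so no pre-built dataframe is needed, then construct the MultiIndex frame and export it for the paper. `escape=False` keeps the `\cite{...}` markup intact. Only the trouillon2016/complex entry quoted above is used as data.

```python
import pandas as pd

# Same shape as the example above; only the one entry quoted in this thread.
published_results = {
    ('trouillon2016', 'complex'): {
        'rank_type': 'best',
        'mean_reciprocal_rank': 0.692,
        'hits_at_1': 0.599,
        'hits_at_3': 0.759,
        'hits_at_10': 0.840,
    },
}

rows = {}
for (reference_key, model_name), metrics in published_results.items():
    metrics = dict(metrics)  # copy so the input dict is not mutated
    rank_type_key = metrics.pop('rank_type')[0]  # 'best' -> 'b', etc.
    reference = r'\cite{' + reference_key + '}^{' + rank_type_key + '}'
    rows[(model_name, reference)] = metrics

df = pd.DataFrame.from_dict(rows, orient='index')
df.index = pd.MultiIndex.from_tuples(df.index, names=['model', 'reference'])

# escape=False so the \cite{...}^{b} markers survive into the LaTeX source
print(df.to_latex(escape=False))
```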
I am not sure what the best way is to get the JSON configs from pykeen.
The old configs are contained in
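If they are plain JSON files on disk, loading them could be as simple as this sketch (`config_dir` is a hypothetical path standing in for the location referenced above):

```python
import json
from pathlib import Path

# Hypothetical directory; substitute the actual location of the old configs
config_dir = Path('old_configs')

# Map each file's stem (e.g. a hypothetical 'fb15k_complex') to its parsed content
configs = {
    path.stem: json.loads(path.read_text())
    for path in sorted(config_dir.glob('*.json'))
}
```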
Done in #18
Thanks!
For the paper we need a table which shows our obtained results in comparison to the previously published results.
I suggest the following format; here is an excerpt from the FB15k table,
with the following legend:
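The excerpt and legend were attached as images and are not preserved here. As a purely hypothetical reconstruction from the numbers and `\cite` markup discussed above (the superscript abbreviates the rank type, e.g. b = best), the format might look like:

```latex
% Hypothetical reconstruction, not the original excerpt; requires booktabs.
% Only the trouillon2016/complex row is taken from the numbers quoted above.
\begin{tabular}{llcccc}
\toprule
Model & Reference & MRR & Hits@1 & Hits@3 & Hits@10 \\
\midrule
complex & \cite{trouillon2016}$^{b}$ & 0.692 & 0.599 & 0.759 & 0.840 \\
\bottomrule
\end{tabular}
```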