I was wondering what the motivation is for using this scaling factor during evaluation:

```python
depth_pred *= torch.median(depth_gt) / torch.median(depth_pred)
```

https://github.com/facebookresearch/DistDepth/blob/dde30a4cecd5457f3c4fa7ab7bf7d5c8ea95934a/execute_func.py

If we remove this scaling, the results are no longer aligned with what is reported in the paper: on the NYUv2 dataset, the RMSE increases from 0.58 to 0.99.

Can you please explain why you used that scaling during evaluation? Is it reasonable to rescale the prediction using ground-truth information?

Also, in compute_depth_errors(), how did you decide on 1.25 as a threshold?
We follow the standard protocol of prior self-supervised methods for evaluation on NYUv2, where median scaling is applied. See P2Net and StructDepth.
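For concreteness, here is a minimal sketch of that step (the function name `median_scale` is illustrative, not the repo's API). Self-supervised monocular methods recover depth only up to an unknown global scale, so one scalar per image is fitted before computing error metrics:

```python
import torch

def median_scale(depth_pred: torch.Tensor, depth_gt: torch.Tensor) -> torch.Tensor:
    """Align the prediction's median to the ground truth's median.

    Self-supervised monocular training has no absolute scale signal,
    so evaluation fits a single per-image scalar (the median ratio)
    before measuring errors such as RMSE.
    """
    return depth_pred * (torch.median(depth_gt) / torch.median(depth_pred))
```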
1.25 is the conventional threshold used in the field to measure depth accuracy. See https://arxiv.org/pdf/2003.06620.pdf.
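The standard threshold metrics (often written δ < 1.25, 1.25², 1.25³) count the fraction of pixels whose predicted depth is within the given factor of the ground truth. A sketch of how they are typically computed (illustrative, not the repo's exact compute_depth_errors()):

```python
import torch

def threshold_accuracy(depth_pred: torch.Tensor, depth_gt: torch.Tensor):
    """delta_1/2/3: fraction of pixels within a factor of 1.25,
    1.25**2, and 1.25**3 of the ground truth."""
    # Per-pixel ratio, symmetric in over- and under-prediction; 1.0 is perfect.
    thresh = torch.max(depth_gt / depth_pred, depth_pred / depth_gt)
    a1 = (thresh < 1.25).float().mean()
    a2 = (thresh < 1.25 ** 2).float().mean()
    a3 = (thresh < 1.25 ** 3).float().mean()
    return a1, a2, a3
```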