
Some problem with PSNR calculation #54

Open · tingzhongyue opened this issue Oct 11, 2024 · 5 comments

Comments

@tingzhongyue

Thank you for your great work and contribution.

When I tried to train 4DGS, I found that the results loaded from the saved .pth are inconsistent with the evaluation results reported during training. For example, on sear_steak the training-time PSNR of 4DGS at iteration 6000 is 33.02, but when I load the .pth for testing it gives me 32.63.

My test code is written following this issue: https://github.com/fudan-zvg/4d-gaussian-splatting/issues/12.

The PSNR evaluation code is shown below:

import torch
from tqdm import tqdm

# render() and psnr() are the renderer and PSNR helpers from the repository
psnr_test = 0.0
for idx, batch_data in enumerate(tqdm(views, desc="Rendering progress")):
    gt_image, viewpoint = batch_data
    gt_image = gt_image.cuda()
    viewpoint = viewpoint.cuda()
    # render the view and clamp to [0, 1] before computing PSNR
    render_pkg = render(viewpoint, gaussians, pipeline, background)
    image = torch.clamp(render_pkg["render"], 0.0, 1.0)
    psnr_test += psnr(image, gt_image).mean().double()
psnr_test /= len(views)
print(psnr_test)
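
For reference, the psnr call above is the standard peak signal-to-noise ratio computed on images scaled to [0, 1]; a minimal sketch of such a helper (an illustration, not necessarily the exact function shipped with the repository):

import torch

def psnr(img1, img2):
    # mean squared error per leading slice (channel or batch), flattened over the rest
    mse = ((img1 - img2) ** 2).view(img1.shape[0], -1).mean(1, keepdim=True)
    # peak value is 1.0 for images in [0, 1]
    return 20 * torch.log10(1.0 / torch.sqrt(mse))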

Have you ever encountered a problem like this, and do you know how to solve it?

@tingzhongyue
Author

It is worth mentioning that when I tested my code on dnerf, it worked well, and on the dynerf dataset it also worked at iteration 3000. However, once the number of iterations increased, this problem appeared. The same thing also happens when I test the saved ply file.

@Alexander0Yang
Collaborator

Hi, @tingzhongyue,

I haven't encountered this issue. For coffee_martini and flame_salmon, the testing script provided in #12 might cause a significant drop in performance because it does not load the envmap, but for other scenes, this script is sufficient to reproduce the performance reported in the paper, as mentioned in #24.

You can also check the rendered images and the input parameters of the rasterizer in both the training process and your own testing script to gather more information for pinpointing the problem.
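
One concrete way to do that is to log per-view PSNR and dump the rendered/ground-truth pairs from the test loop, so they can be compared against the images produced during training-time evaluation. A minimal sketch, reusing the views, render, gaussians, pipeline, background, and psnr objects from the snippet above (the output directory name is just an example):

import os
import torch
from torchvision.utils import save_image
from tqdm import tqdm

out_dir = "debug_renders"  # example directory for inspection
os.makedirs(out_dir, exist_ok=True)

for idx, (gt_image, viewpoint) in enumerate(tqdm(views, desc="Debug rendering")):
    gt_image = gt_image.cuda()
    viewpoint = viewpoint.cuda()
    render_pkg = render(viewpoint, gaussians, pipeline, background)
    image = torch.clamp(render_pkg["render"], 0.0, 1.0)
    # per-view PSNR shows whether the drop is uniform or concentrated
    # in particular frames / timestamps
    print(f"view {idx:04d}: PSNR = {psnr(image, gt_image).mean().item():.2f}")
    # save render and ground truth side by side for visual comparison
    save_image(torch.cat([image, gt_image], dim=-1), f"{out_dir}/{idx:04d}.png")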

@Duanener

@tingzhongyue Hello, have you solved it? I have a similar problem.

@tingzhongyue
Author

I have solved this problem. It was due to a very small mistake: when modeling the scenes from the N3V dataset, the time duration is [0, 10], but the vanilla code uses the default value, [-0.5, 0.5]. Changing the time duration setting solves it. The envmap is of course also necessary for coffee_martini and flame_salmon. I hope this helps.
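
To illustrate why the duration setting matters, here is a small sketch under the assumption that each frame's timestamp is mapped into the configured time interval before being passed to the model (the helper name below is hypothetical):

def normalize_time(frame_fraction, duration):
    # map a frame's fractional position in the video into the configured interval
    t_min, t_max = duration
    return t_min + frame_fraction * (t_max - t_min)

t = 0.3  # 30% of the way through the sequence
print(normalize_time(t, (0.0, 10.0)))   # 3.0  -> range the checkpoint was trained over
print(normalize_time(t, (-0.5, 0.5)))   # -0.2 -> range used by the default test setting
# With mismatched settings the checkpoint is queried at times it was never trained on,
# which can explain why the saved model evaluates below the training-time numbers.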

@Duanener

Thanks a lot! It works!
