Posterior convolution for ODE sampling #51
Luckily, in our case the mean of the prior is zero at least... that probably makes things a little simpler.
Here is the demo when adding noise to the two distributions (assuming zero means):
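A minimal sketch of what such a demo could look like (an assumption on my part: the noise is split by scaling both variances by a common factor $\lambda = 1 + \sigma_t^2/\sigma_{fg}^2$, which adds exactly $\sigma_t^2$ to the product variance; the split actually used in the demo isn't recorded here, and all numbers are illustrative):

```python
import numpy as np
from scipy.stats import norm

# Illustrative values: zero-mean prior f and likelihood g, plus the
# noise sigma_t we want to add to the product fg.
sigma_f, sigma_g, sigma_t = 1.0, 2.0, 0.5

var_fg = sigma_f**2 * sigma_g**2 / (sigma_f**2 + sigma_g**2)
lam = 1.0 + sigma_t**2 / var_fg  # common scaling of both variances

x = np.linspace(-6.0, 6.0, 1001)
f_t = norm.pdf(x, scale=np.sqrt(lam) * sigma_f)  # noised f
g_t = norm.pdf(x, scale=np.sqrt(lam) * sigma_g)  # noised g

prod = f_t * g_t
prod /= prod.sum() * (x[1] - x[0])  # renormalize the product of pdfs

target = norm.pdf(x, scale=np.sqrt(var_fg + sigma_t**2))
print(np.abs(prod - target).max())  # ~0: the two curves overlap
```

Unlike noising $f$ alone (see the math further down), this proportional split works for any amount of added noise, since $\lambda$ can grow without bound.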
Small question @b-remy: what is the smallest temperature we can reach with the full non-Gaussian prior?
It seems that we can reach any temperature we want at the end of the annealing, or at least the annealing trace returns the input temperature. However, reaching a very low temperature does not seem to be enough to perfectly recover the Wiener estimate (50 samples were used to compute the mean shown above). @EiffL, you may notice that the power spectrum is much closer to what we observed last week, which is due to:
So, what is the smallest temperature we can reach with the full non-Gaussian prior?
The smallest temperature I managed to reach when fine-tuning one chain is … I will update the results here with 20 samples.
That looks really, really good, no? :-)
Yes, pretty good :-), but as seen in both the Gaussian and full prior examples, we still cannot "perfectly" sample the whole posterior. Maybe we will want to smooth the very small scales as you suggested, assuming that the SNR there is way too low to contain any information. That's what I'm currently thinking about.
Ok cool, we don't necessarily need to do anything. We can just use that sampling strategy, apply it in the Gaussian case, and state that we can recover the expected posterior mean down to some given scale.
Ok, I've done some math to try to figure out what a convolved posterior might look like.
Let's start from here:
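(Reconstructing the missing equation from context: two Gaussians, with $f$ playing the role of the prior, zero-mean in our case, and $g$ the likelihood.)

$$f(x) = \mathcal{N}\!\left(x \mid \mu_f, \sigma_f^2\right), \qquad g(x) = \mathcal{N}\!\left(x \mid \mu_g, \sigma_g^2\right)$$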
Then the product of these two Gaussians has the following parameters:
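These should be the usual product-of-Gaussians identities:

$$\sigma_{fg}^2 = \frac{\sigma_f^2\,\sigma_g^2}{\sigma_f^2 + \sigma_g^2}, \qquad \mu_{fg} = \frac{\mu_f\,\sigma_g^2 + \mu_g\,\sigma_f^2}{\sigma_f^2 + \sigma_g^2}$$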
Now, let's say we want to convolve this Gaussian with a kernel of width $\sigma_{tfg}$; this is the noise we want to add to the full posterior. We can try to add noise to $f$, for instance, and see how much noise $\sigma_{tf}$ we should add to $f$ to achieve the desired amount of noise on $fg$. After some math, you find that:
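Redoing the algebra (the original expression is not preserved here, so this is a reconstruction): requiring the product of the noised $f$, with variance $\sigma_f^2 + \sigma_{tf}^2$, and the original $g$ to have variance $\sigma_{fg}^2 + \sigma_{tfg}^2$ gives

$$\sigma_{tf}^2 = \frac{\sigma_{tfg}^2\left(\sigma_f^2 + \sigma_g^2\right)^2}{\sigma_g^4 - \sigma_{tfg}^2\left(\sigma_f^2 + \sigma_g^2\right)}$$

which is finite and positive only while $\sigma_{fg}^2 + \sigma_{tfg}^2 < \sigma_g^2$.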
So notice here that this will only work if $\sigma_g$ is larger than the noise we add.
I tested this on this notebook: https://colab.research.google.com/drive/1-900p399fHa4hnN8bd_OmDg4mlsbrzZZ?usp=sharing
The blue line is the original $fg$, the solid line is the analytically tempered $fg$, and the dashed line is the original $g$ multiplied by the tempered $f$ with the above amount of noise.
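A minimal self-contained check of the variance algebra above (illustrative numbers, independent of the notebook):

```python
import numpy as np

# Illustrative values (not from the notebook): prior f, likelihood g,
# and the noise sigma_tfg we want on the product fg.
sigma_f, sigma_g, sigma_tfg = 1.0, 2.0, 0.5

# Variance of the product of the two Gaussians.
var_fg = sigma_f**2 * sigma_g**2 / (sigma_f**2 + sigma_g**2)

# Reconstructed formula: noise to add to f alone so the product picks
# up an extra sigma_tfg**2 of variance.
den = sigma_g**4 - sigma_tfg**2 * (sigma_f**2 + sigma_g**2)
assert den > 0, "requested temper exceeds what noising f alone can reach"
var_tf = sigma_tfg**2 * (sigma_f**2 + sigma_g**2) ** 2 / den

# Check: the product of the noised f with the original g matches the
# analytically tempered fg.
var_f_noised = sigma_f**2 + var_tf
var_product = var_f_noised * sigma_g**2 / (var_f_noised + sigma_g**2)
np.testing.assert_allclose(var_product, var_fg + sigma_tfg**2)
print(var_product, var_fg + sigma_tfg**2)  # both ~1.05 here
```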
So this works, but there are two annoying things:
1. Here this is only when adding noise to one of the components, and it would only work up to the size of the largest Gaussian. So we should instead do this by adding noise to the two distributions.
2. I've only looked at what happens to the variance, but the mean is also affected by tempering the components of the product. So we should check that the mean is going to be tractable (spelled out below) :-|
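To spell out that last point (a step reconstructed here, not in the original comment): with the product mean from above, noising $f$ alone gives

$$\mu_{tfg} = \frac{\mu_f\,\sigma_g^2 + \mu_g\left(\sigma_f^2 + \sigma_{tf}^2\right)}{\sigma_f^2 + \sigma_{tf}^2 + \sigma_g^2},$$

so the mean drifts toward $\mu_g$ as $\sigma_{tf}^2$ grows, whereas a true convolution of $fg$ would leave the mean at $\mu_{fg}$.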