Update prepare_sim.py #142
Conversation
Updated Menv correction at boundaries:
1. Added more randoms, such that after cutting into an octant, we have twelve times more randoms than haloes.
2. Changed the random correction from multiplication to division.
It seems like this really is a bug. I will take a look in a bit just to make sure. I feel less certain about increasing the number of randoms, however. The code already chokes at some of the higher redshifts; we should probably estimate its peak memory usage.
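A back-of-the-envelope estimate of the randoms' memory footprint might look like the sketch below. All numbers here are hypothetical placeholders (not measurements from prepare_sim.py), assuming each random point stores three float64 coordinates:

```python
# Rough memory estimate for the random catalog.
# All counts below are illustrative assumptions, not values from the code.
n_halos = 50_000_000     # hypothetical halo count at a high redshift
multiplier = 100         # randoms generated per halo, before the octant cut
bytes_per_point = 3 * 8  # x, y, z stored as float64

total_gb = n_halos * multiplier * bytes_per_point / 1e9
print(f"~{total_gb:.0f} GB before cutting to the octant")
```

Scaling `n_halos` to the actual catalog size at each redshift would show whether the larger multiplier is affordable.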
Thanks for the reply. Regarding the randoms: if I am not misunderstanding, the randoms are used on lines 418-432, and this part is reasonably fast for me. At least I do not observe a significant difference in speed when increasing the number of randoms.
Could I just ask you what redshifts you are running prepare_sim for?
— Boryana Hadzhiyska, Postdoctoral Fellow, UC Berkeley & Berkeley Lab
Yes, of course. Currently, I am only going up to z = 0.4.
Thanks. I made changes on another branch to fix the reference files for the tests and changed the number of randoms to 50 (at z ~ 1-1.5 it becomes quite expensive, but hopefully it is okay). I will close this PR now; your commit appears in the history. Thank you very much for discovering this bug!
Yes, that's okay. I think ~ five times more randoms should be enough. Thank you for approving this so quickly.
Hello,
I have found an issue with using the assembly bias parameters. Depending on the Bcent or Bsat value, I see pronounced over/under-densities at the edges (in RA, Dec, and redshift) of the individual light cones.
The issue is in the prepare_sim.py file, where the Menv values are corrected at the boundaries of the light cones. In particular, line 445 multiplies by rand_norm instead of dividing by it. Division is needed because rand_norm is the observed number of randoms divided by the expected number, so the more volume is missing, the smaller rand_norm gets. To correct the Menv calculations, they must therefore be divided by rand_norm, which upweights the regions with a smaller effective volume.
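A minimal sketch of the proposed change (the array names and values below are illustrative, not taken from prepare_sim.py):

```python
import numpy as np

# rand_norm = observed randoms / expected randoms per region.
# Near a light-cone boundary, part of the volume is missing, so rand_norm < 1.
rand_norm = np.array([1.0, 0.5, 0.25])  # interior, edge, corner (illustrative)
Menv_raw = np.array([10.0, 5.0, 2.5])   # environment mass measured in the available volume

# Before the fix (wrong): multiplying downweights truncated regions even further.
Menv_old = Menv_raw * rand_norm

# After the fix: dividing upweights regions with a smaller effective volume,
# recovering a consistent Menv across the boundary.
Menv_new = Menv_raw / rand_norm
```

In this toy example the corrected values come out equal across all three regions, which is the intended behavior when the underlying environment is uniform.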
As a minor improvement, I increased the number of randoms by a factor of ten, from 10 to 100 per halo, since after cutting the randoms to the octant only 1/8 of them survive. Originally, this left only 1.25 times as many randoms as haloes.
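The factor-of-eight loss from the octant cut can be checked with a quick Monte Carlo sketch (hypothetical code, not the project's implementation):

```python
import numpy as np

rng = np.random.default_rng(42)
n_halos = 100_000
multiplier = 10  # the original randoms-per-halo factor

# Draw isotropic directions on the full sphere via normalized Gaussian draws.
xyz = rng.normal(size=(multiplier * n_halos, 3))

# Cutting to a single octant (all coordinates positive) keeps ~1/8 of the points.
in_octant = np.all(xyz > 0, axis=1)
survivors = in_octant.sum()

print(survivors / n_halos)  # ~1.25 surviving randoms per halo
```

This reproduces the ~1.25 randoms-per-halo figure quoted above; raising the multiplier to 100 leaves ~12.5 per halo after the cut.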
Please let me know whether these corrections are right or whether I have misunderstood something. With them applied, the over/under-densities at the edges almost disappear.
All the best
Pierre