While we discuss the details of the inference in #23, I think it's clear that to predict weak lensing data and evaluate our log likelihood we will need to compute the lensing effects (gamma, kappa) of ~100s of halos on each source, for ~10^5 sources per sq deg, in under a second (to keep our lives sane). I started wondering about fast methods for doing this computation in parallel, like Map/Reduce; my limited understanding from half an hour's reading is that some databases enable this automatically, by design.
So, I'm thinking that we are likely to want to do this project with some sort of actual database, preferably one sitting on top of thousands of CPUs! Since the importance sampling could, I think, be a very general use case for an LSST object catalog containing interim posterior samples of things like galaxy shape and galaxy photo-z, I'm thinking about boiling this problem down to its simplest possible pseudocode and a plain-python, everything-in-memory demo, and then getting in touch with Jacek and KT at SLAC to see if they want to collaborate.
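To make that concrete, here is a first pass at the in-memory demo, as a minimal sketch of the map/reduce structure: the map step computes one halo's contribution at one source, and the reduce step sums contributions over halos (which is fine in the weak-lensing limit). The SIS profile (kappa = gamma_t = theta_E / 2·theta), the flat-sky treatment of (ra, dec), and the field names (`theta_e`, etc.) are all placeholders for whatever we actually settle on:

```python
import numpy as np

def sis_kappa_gamma(theta, theta_e):
    """Convergence and tangential shear of a singular isothermal sphere.

    For an SIS, kappa(theta) = gamma_t(theta) = theta_e / (2 theta).
    Placeholder profile -- swap in NFW etc. later.
    """
    kappa = theta_e / (2.0 * theta)
    return kappa, kappa

def map_halo(source, halo):
    """Map step: one halo's (kappa, gamma1, gamma2) contribution at one source."""
    # treat (ra, dec) as flat-sky Cartesian coordinates for the demo
    dx = source["ra"] - halo["ra"]
    dy = source["dec"] - halo["dec"]
    theta = np.hypot(dx, dy)
    kappa, gamma_t = sis_kappa_gamma(theta, halo["theta_e"])
    # decompose the tangential shear into its two spin-2 components so that
    # contributions from halos at different position angles add correctly
    phi = np.arctan2(dy, dx)
    return kappa, -gamma_t * np.cos(2 * phi), -gamma_t * np.sin(2 * phi)

def reduce_lensing(source, halos):
    """Reduce step: sum the per-halo contributions (weak-lensing limit)."""
    kappa, gamma1, gamma2 = map(sum, zip(*(map_halo(source, h) for h in halos)))
    return kappa, gamma1, gamma2

# Toy catalogs: ~100s of halos and some sources scattered over the same patch.
rng = np.random.default_rng(42)
halos = [{"ra": rng.uniform(0, 1), "dec": rng.uniform(0, 1), "theta_e": 1e-4}
         for _ in range(300)]
sources = [{"ra": rng.uniform(0, 1), "dec": rng.uniform(0, 1)}
           for _ in range(1000)]

signals = [reduce_lensing(s, halos) for s in sources]
```

In the real thing the inner sum would be vectorized with numpy (or pushed into the database's own map/reduce machinery), but the map-then-reduce structure is the part I think carries over.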
Thoughts?
This is certainly one way to go, though I am not familiar with it myself. Another way is to do it all in MPI with Python, at least to start. I can help with this approach, and it might make it easier for us to change things as we go.
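For what it's worth, the MPI version of the demo could start out as simple as scattering the sources across ranks and gathering the sums back. A minimal mpi4py sketch (reusing `reduce_lensing` and the toy catalogs from the demo above; this is a starting point, not a design):

```python
from mpi4py import MPI
import numpy as np

# reduce_lensing as defined in the in-memory demo above

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # build (or load) the catalogs on the root rank only
    rng = np.random.default_rng(42)
    halos = [{"ra": rng.uniform(0, 1), "dec": rng.uniform(0, 1), "theta_e": 1e-4}
             for _ in range(300)]
    sources = [{"ra": rng.uniform(0, 1), "dec": rng.uniform(0, 1)}
               for _ in range(1000)]
    chunks = [list(c) for c in np.array_split(sources, comm.Get_size())]
else:
    halos, chunks = None, None

halos = comm.bcast(halos, root=0)          # every rank needs the full halo list
my_sources = comm.scatter(chunks, root=0)  # each rank gets its own sources

# the embarrassingly parallel part: per-source sums over halos
my_signals = [reduce_lensing(s, halos) for s in my_sources]

all_signals = comm.gather(my_signals, root=0)  # root collects the results
if rank == 0:
    signals = [sig for chunk in all_signals for sig in chunk]
```

Run with e.g. `mpiexec -n 8 python demo.py`. The nice thing about this route is that nothing about the per-source computation changes as we scale up the rank count.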