MPI #106
👍 on the RasPi... I know this would be a large scope increase, but I think it would be a really good one.
I have mixed feelings about this.
The MPI implementation remains an interest, but it's not a priority in the near term.
Ha! I think we are actually in complete agreement on all your points. I'm not actually serious about the RPi... sorry if that caused any confusion. While it's one of those "fun challenge problems," I don't envision it ever being relevant for actual modeling and simulation. I am, however, curious about shared vs. distributed memory, and about how both compare to the hybrid model. That DOES have relevance. Though the results are likely specific to the mathematical approach (finite difference, explicit time stepping), it would definitely be a useful thing to know. Maybe MPI could go into HiPerC v2.0 or something... long-term strategic vision. Obviously there's existing project development to finish first before increasing the scope.
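For concreteness, here is a minimal sketch (not HiPerC code) of the shared-memory side of that comparison: an explicit finite-difference diffusion update parallelized with OpenMP. The function name, array layout, and parameters are illustrative assumptions, not HiPerC's actual API.

```c
/* Sketch only: explicit 1-D diffusion update, shared-memory (OpenMP).
 * u and unew are the current and next fields; n, D, dx, dt are assumed
 * problem parameters. Compile with an OpenMP flag (e.g. -fopenmp). */
void diffusion_step(const double *u, double *unew, int n,
                    double D, double dx, double dt)
{
    /* all threads share u and unew; the interior loop is split across threads */
    #pragma omp parallel for
    for (int i = 1; i < n - 1; i++)
        unew[i] = u[i] + D * dt / (dx * dx) * (u[i-1] - 2.0 * u[i] + u[i+1]);
}
```

In the hybrid model, a loop like this would run inside each MPI rank's subdomain, with MPI handling the exchange of boundary values between ranks.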
Setting it aside for HiPerC v2.0 is a pretty good idea: focus on shared memory (as the README states) for the first round of benchmarks, work hard to get the word out on that front and maybe incorporate more interesting numerical methods, and then do hybrid implementations with MPI when there's community demand for it or a compelling use case.
An MPI implementation may be necessary, since MPI is the standard route to distributed-memory parallelism across multiple nodes. Such an implementation would enable HiPerC benchmarks of distributed-memory and hybrid performance alongside the existing shared-memory codes.
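As a rough indication of what that would involve, below is a minimal sketch (not HiPerC code) of a distributed-memory version of the same kind of update: each MPI rank owns a strip of the domain with one ghost cell on each side, exchanges halos with its neighbors, and then applies the explicit finite-difference step. The domain size, diffusivity, grid spacing, and time step are illustrative assumptions.

```c
/* Sketch only: explicit 1-D diffusion with MPI halo exchange. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N_LOCAL 1000   /* interior points owned by each rank (assumed) */
#define D  1.0         /* diffusivity (assumed) */
#define DX 0.1         /* grid spacing (assumed) */
#define DT 0.001       /* time step, chosen so DT <= DX*DX/(2*D) for stability */

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* local arrays with one ghost cell on each side, zero-initialized */
    double *u    = calloc(N_LOCAL + 2, sizeof(double));
    double *unew = calloc(N_LOCAL + 2, sizeof(double));

    int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    /* simple initial condition: a spike on rank 0 */
    if (rank == 0) u[1] = 1.0;

    for (int step = 0; step < 100; step++) {
        /* exchange ghost cells with neighboring ranks */
        MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                     &u[N_LOCAL + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[N_LOCAL], 1, MPI_DOUBLE, right, 1,
                     &u[0], 1, MPI_DOUBLE, left, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* explicit finite-difference update on the interior points */
        for (int i = 1; i <= N_LOCAL; i++)
            unew[i] = u[i] + D * DT / (DX * DX) * (u[i-1] - 2.0 * u[i] + u[i+1]);

        double *tmp = u; u = unew; unew = tmp;
    }

    if (rank == 0) printf("finished on %d ranks\n", size);

    free(u);
    free(unew);
    MPI_Finalize();
    return 0;
}
```

The Sendrecv pair shifts boundary values in both directions each step; ranks at the ends of the domain use MPI_PROC_NULL, so their ghost cells stay at zero (a simple fixed boundary for the sketch).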