Incompatible Package Error when executing setup-meta-modules #1360
Can you add …?
I'm not using an external openmpi; it is being built by spack-stack. Both instances in concretize (under jedi-ufs-env and nco) have the exact same specs (including hash), and only one openmpi is actually built.
The error message above is suggesting that the specs don't match. What does …?
I am not positive that this is the problem, but I see that the core compiler above is listed as gcc@11 - why? I think we hardcode the core compiler to be 4.6 because we don't want any of the compilers we use to be considered the "core" compiler (because spack doesn't generate certain modules for that compiler).
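For reference, a minimal sketch of what that dummy core-compiler setting looks like in a spack modules config (the file name and the `default` module-set name are assumptions here, not confirmed from the site config in question):

```yaml
# modules.yaml (sketch): declare an unused compiler version as "core"
# so that none of the real stack compilers is treated as the core
# compiler by the lmod hierarchy.
modules:
  default:
    lmod:
      core_compilers:
      - gcc@4.6
```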
I didn't do anything differently than in past stack builds, but I can tell you that the gcc compiler was used in this intel stack to build the packages gcc-runtime and boost: …
Looking at the old ticket referenced above, the core compiler was indeed set to 4.6.
Wondering then where this comes from (from your output above): …
I'm not sure because …
But then it is overridden in …
That didn't happen in previous builds. I checked old environments and there is no mention of core_compilers in spack.yaml. I followed the same recipe notes for this latest version, except that I had to manually edit … However, I got this same error with a stack that had core_compilers set as expected (after moving aside one of the openmpi packages in #1126).
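To make the symptom concrete: an override like the following inside spack.yaml would take precedence over the site-level modules config for that one environment. This is a sketch of the shape such an entry would take; the gcc@11.4.1 value is inferred from the output discussed above and the surrounding structure is my assumption:

```yaml
# spack.yaml (sketch): a modules section embedded in the environment
# file overrides the site modules config, so a stray entry like this
# changes the core compiler for this environment only.
spack:
  modules:
    default:
      lmod:
        core_compilers:
        - gcc@11.4.1   # unwanted: a real stack compiler acting as "core"
```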
Is this on a machine that I can access? I am having a hard time seeing what is wrong.
Unfortunately not, but it's not a special machine, and I can send you my exact procedure through email if that will help. Also, this is not a tagged release of spack-stack; it is the develop branch as of 10/28/24, which appears to have not changed since the 21st. I'm guessing it's one of the spack commands I use during my setup, because I just initialized a bogus new environment and core_compilers is not yet in my spack.yaml file.
I was able to get the core compiler thing fixed, but the error still occurs. I added some print statements to …
I see a pattern with all your issues, which is that spack somehow knows something about another environment when it shouldn't. Something must be different in your workflow, because this is definitely not the case in our setups. spack environments are 100% separate from each other. Have you inspected your …?
I didn't see anything out of order in my … As mentioned in #1387, I started from scratch after realizing the removal of the …

The only module path that still contained a hash was openmpi. I manually combined the two folders (moved …). Any idea what could be causing the hash to be included in the openmpi path? Notice that the hash was not appended to the end of the module name but was in the middle of the path: …

Successful output from the intel environment: …
I believe that the hash in the module hierarchy for lmod is something that spack does, and it shouldn't hurt (we updated the …).
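If the hash itself is the concern: spack's modules config has a hash_length setting that suppresses the hash suffix on generated module names. Whether it also governs the lmod hierarchy directories seen above is something I would verify rather than assume; the snippet below is only a sketch of the setting, with the file name and module-set name assumed:

```yaml
# modules.yaml (sketch): hash_length: 0 drops the hash suffix from
# generated module names; it is set per module set and module system.
modules:
  default:
    lmod:
      hash_length: 0
```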
When you or other maintainers and installers build the stack, do you get a hash in the module hierarchy for openmpi? Something like: …
In a quick scan of the meta_modules python code, I see accounting for cases of name-version-hash in the module path, but not name/version-hash. Looking at some of my older spack-stacks, it looks like the hash started getting added to my folders in …
Yes, for example the following: …
This is a difference between …
One thing I will point out - and that's very important - is that when you want to change the default module choice for a site config (e.g. change …) …
Yes, I mentioned that I made that change to … On a side note, having the core_compiler set to the compiler you're using has the nasty side effect of creating all the module files in …
Describe the bug
Building the develop branch of the jedi-ufs-env stack and nco (as of 10/28/2024) with Intel results in a "Package with matching name is incompatible" error when it tries to create the meta modules. This bug is very similar to #1126.
To Reproduce
Note: I have forced the use of Open MPI 4.1.6 by requesting it in site/packages.yaml to facilitate using TotalView (the 5.x line has removed the MPIR process acquisition interface required by TotalView).
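A sketch of that pin, assuming the usual spack packages.yaml schema (the `require` form is one of a couple of equivalent ways to express it):

```yaml
# site/packages.yaml (sketch): pin openmpi to the 4.1.6 release so the
# concretizer never selects the 5.x line.
packages:
  openmpi:
    require: "@4.1.6"
```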
Ask spack to build jedi-ufs-env and nco, either with the command `spack add jedi-ufs-env%intel nco%intel` or by editing your spack.yaml. Concretize has openmpi listed under both jedi-ufs-env and nco, but only one openmpi is built. I can provide the concretize output if needed. The stack builds to completion and `spack module lmod refresh` is successful (and creates only one openmpi module), but `spack stack setup-meta-modules` errors: …

The stack is still usable, but no stack-openmpi or stack-python modules are created. I suspect this is actually the reason why I got this same error, shown in #1126, when I moved aside one of the openmpi builds. The stack in that issue was also jedi-ufs-env and nco.
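For the spack.yaml route, a minimal sketch of the specs list (assuming a typical spack environment file; everything beyond the specs entry is omitted):

```yaml
# spack.yaml (sketch): request both bundles with the Intel compiler,
# equivalent to the spack add command above.
spack:
  specs:
  - jedi-ufs-env%intel
  - nco%intel
```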
Expected behavior
`spack stack setup-meta-modules` should complete without errors.

System:
Linux workstation running Rocky 9. Intel is 2023.0.0 (with the 2021.8.0 legacy compilers icc, icpc, ifort) and GCC is 11.4.1, installed via the system package manager.
Additional context
I suspect I might be able to work around this by simply adding a `depends_on("nco", type="run")` to the spack-ext/repos/spack-stack/packages/jedi-ufs-env/package.py file, but I have not tested this yet. A quick look at spack-ext/lib/jcsda-emc/spack-stack/stack/meta_modules.py makes me think that any time an MPI package is referenced more than once, this error will occur even if they are, in fact, the exact same package.