This is like pypa/pip#9071, but in a more generic sense. I suppose this may tie into the algorithm, but I don't think the current algorithm requires this to be done. In short, before doing any resolution work, the resolver tries to find all the matches for each of the initial requirements:

resolvelib/src/resolvelib/resolvers.py, lines 280 to 287 in 73c6605
Now suppose I have [A, B>6], where the first match for A is version 42 and A 42 depends on B<9, and suppose Provider.find_matches([B>6]) and Provider.find_matches([B>6, B<9]) do totally different things (or the results of the underlying expensive operations can't be shared). Then Provider.find_matches([B>6]) is wasteful.
In pip's case, of course, the majority of the time it's reaching out to Simple API repositories, where one index is shared for all versions of B, so with caching/memoization it's not a problem in practice.
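For illustration, a minimal sketch of the caching idea mentioned above. Every name here (the index client, `get_project_versions`, the requirement/candidate attributes) is invented for the example, and it uses the list-based `find_matches` signature as written in this issue (newer resolvelib versions pass `identifier`, `requirements`, `incompatibilities`). The point is only that the expensive per-project fetch is memoized, so the extra `find_matches([B>6])` call from the initial round costs one dictionary lookup rather than another network round trip:

```python
from functools import lru_cache


class CachingProvider:
    """Hypothetical provider fragment: only the caching pattern matters."""

    def __init__(self, index_client):
        self._index_client = index_client  # e.g. a Simple-API-like repository client

    @lru_cache(maxsize=None)
    def _fetch_candidates(self, project_name):
        # The expensive part: one index fetch per project name. Because it is
        # memoized, find_matches([B>6]) in the initial round and
        # find_matches([B>6, B<9]) later only hit the index once for B.
        return tuple(self._index_client.get_project_versions(project_name))

    def find_matches(self, requirements):
        # List-based signature, matching the issue text above.
        project_name = requirements[0].project_name
        candidates = self._fetch_candidates(project_name)
        return [
            c
            for c in candidates
            if all(req.specifier.contains(c.version) for req in requirements)
        ]
```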
The initial round is implemented partially as a “fail fast” mechanism to report immediately if the user-supplied requirement set already contains conflicts. But the same can be achieved with appropriate ordering (e.g. pip’s current approach of prioritising resolution of user-requested packages), so it may indeed be possible to do away with that initial round. I have not thought deeply enough about the algorithm to be sure that’s correct, however. A good start would probably be to remove that part of the code and run the test suite with a tweaked get_preference() implementation, along the lines of the sketch below.
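A rough sketch of what such a get_preference() tweak could look like. The (resolution, candidates, information) signature and the shape of `information` (pairs of a requirement and its parent candidate, with the parent being None for user-supplied requirements) follow the resolvelib API of this era, but treat the details as assumptions; the idea is simply to sort user-requested identifiers ahead of transitively discovered ones, since lower preference values are resolved first:

```python
class UserFirstProvider:
    """Hypothetical provider fragment: only get_preference() is shown."""

    def get_preference(self, resolution, candidates, information):
        # `information` is assumed to be a sequence of (requirement, parent)
        # pairs; parent is None when the requirement came directly from the
        # user rather than from another candidate's dependencies.
        user_requested = any(parent is None for _, parent in information)
        # Smaller keys are preferred: user-requested identifiers go first,
        # then whichever identifier has the fewest remaining candidates.
        return (0 if user_requested else 1, len(candidates))
```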
👍 I'm currently integrating resolvelib into the ansible-galaxy CLI, and I've also noticed that find_matches happens before the obvious conflicts are detected. So it does sound reasonable to do such an optimization.
One optimization (hack?) I see, on the caller side, is supplying a virtual requirement that depends on all the actual requirements.
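For concreteness, one way that caller-side hack could look. The `VirtualRootRequirement`/`VirtualRootCandidate` classes, the wrapper provider, and the list-based `find_matches` signature are all assumptions made up for this sketch; the idea is just that the resolver is handed a single top-level requirement, and the real requirements only enter the graph as dependencies of its sole candidate:

```python
class VirtualRootRequirement:
    """Hypothetical sentinel requirement wrapping the real top-level ones."""

    def __init__(self, real_requirements):
        self.real_requirements = list(real_requirements)


class VirtualRootCandidate:
    """The single 'candidate' that satisfies the virtual root requirement."""

    def __init__(self, real_requirements):
        self.real_requirements = list(real_requirements)


class VirtualRootProvider:
    """Wraps a real provider so only the virtual root is visible at the top."""

    ROOT_ID = "<virtual-root>"

    def __init__(self, real_provider):
        self._real = real_provider

    def identify(self, requirement_or_candidate):
        if isinstance(
            requirement_or_candidate, (VirtualRootRequirement, VirtualRootCandidate)
        ):
            return self.ROOT_ID
        return self._real.identify(requirement_or_candidate)

    def get_preference(self, resolution, candidates, information):
        return self._real.get_preference(resolution, candidates, information)

    def find_matches(self, requirements):
        if any(isinstance(r, VirtualRootRequirement) for r in requirements):
            # Exactly one candidate for the virtual root; no index access needed.
            return [VirtualRootCandidate(requirements[0].real_requirements)]
        return self._real.find_matches(requirements)

    def is_satisfied_by(self, requirement, candidate):
        if isinstance(requirement, VirtualRootRequirement):
            return isinstance(candidate, VirtualRootCandidate)
        return self._real.is_satisfied_by(requirement, candidate)

    def get_dependencies(self, candidate):
        if isinstance(candidate, VirtualRootCandidate):
            # The real top-level requirements only appear here, after the
            # virtual root has been pinned.
            return candidate.real_requirements
        return self._real.get_dependencies(candidate)
```

The caller would then pass `[VirtualRootRequirement(real_requirements)]` to `Resolver.resolve()` instead of the real requirement list.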