IPC: server: avoid temporary channel priority loss, up to deadlock-worth #352

Merged

Conversation

@jnpkrn (Contributor) commented May 16, 2019

It turns out that while 7f56f58 allowed for less blocking (and thus
higher-throughput) initial handling of client connections within the
abstract (out-of-libqb managed) event loop, it unfortunately
re-subscribes itself to that polling mechanism for the UNIX-socket-check
with a default priority, which can be lower than the priority desired
(via explicit qb_ipcs_request_rate_limit() configuration) for the
particular channel (amongst attention-competing siblings in the pool;
"channel" here refers to the associated communication, that is, both
the server and the on-server abstraction for particular clients).

On top of that, it violates the natural assumption that once the
(single-threaded, which is imposed by libqb, at least between the
initial accept() and the after-said-UNIX-socket-check) server accepts
a connection, it shall take care of serving it (at least within the
stated initial scope of the client connection life cycle) rather than
rush to accept new ones -- which is exactly what used to happen
previously once the library user set the effective priority in the
abstract poll above the default one.

It's conceivable that, just as with the former case of
attention-competing siblings with higher priority -- whereby they could
live on indefinitely at the expense of starving the client in the
initial handling (authentication) phase, despite the library user's
as-high-as-siblings intention (the default priority was used for that
unconditionally instead, which is what we address here) -- deadlock is
imminent in this latter accept-to-client-authentication-handling case
as well, if there's an unlimited fast-paced arrival queue (well,
limited by the number of allowable open descriptors within the system,
but with the Linux built-in maximum of 1M there may be no practical
difference, at least for time-sensitive applications).

The only hope, then, is that such deadlocks are rather theoretical,
since a "spontaneous" constant stream of either communication on
unrelated, higher-prio sibling channels or of new connection arrivals
may well testify to the poor design of the libqb IPC application.
That being said, the unconditional default priority in the isolated
context of initial server-side client authentication is clearly
a bug, but such an application shall apply appropriate rate-limiting
measures (exactly on a priority basis) to handle unexpected flux
nonetheless.
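
For illustration, a minimal sketch of the setup described above -- not
code from this PR: the service name "prio_demo", the qb2g_prio helper
and the gio_adaptor bridge are made up, dispatch_mod/dispatch_del are
left as stubs, and error handling is omitted. It shows a libqb IPC
server driven by an abstract (out-of-libqb managed) event loop, here
GLib's, with the channel priority elevated via
qb_ipcs_request_rate_limit(); the dispatch_add bridge is where, pre-fix,
the transitional UNIX-socket-check used to arrive with the default
priority regardless of that configuration.

#include <sys/types.h>
#include <glib.h>
#include <qb/qbipcs.h>

static gint qb2g_prio(enum qb_loop_priority p)  /* illustrative helper */
{
    switch (p) {
    case QB_LOOP_HIGH: return G_PRIORITY_HIGH;
    case QB_LOOP_LOW:  return G_PRIORITY_LOW;
    default:           return G_PRIORITY_DEFAULT;
    }
}

struct gio_adaptor {            /* bridges GIOFunc to libqb's dispatch fn */
    int32_t fd;
    void *data;
    qb_ipcs_dispatch_fn_t fn;
};

static gboolean gio_dispatch(GIOChannel *ch, GIOCondition cond, gpointer ud)
{
    struct gio_adaptor *a = ud;
    /* keep the watch as long as the libqb dispatch function succeeds */
    return (a->fn(a->fd, cond, a->data) == 0);
}

static int32_t bridge_dispatch_add(enum qb_loop_priority p, int32_t fd,
                                   int32_t evts, void *data,
                                   qb_ipcs_dispatch_fn_t fn)
{
    struct gio_adaptor *a = g_new0(struct gio_adaptor, 1);
    GIOChannel *ch = g_io_channel_unix_new(fd);

    a->fd = fd; a->data = data; a->fn = fn;
    /* 'p' is the priority this PR is about: pre-fix, the transitional
     * UNIX-socket-check was re-registered at the default priority here,
     * not at the channel's configured one.  (POLLIN and G_IO_IN happen
     * to coincide numerically on common platforms; a robust bridge
     * would translate the flags explicitly.) */
    g_io_add_watch_full(ch, qb2g_prio(p), (GIOCondition) evts,
                        gio_dispatch, a, NULL);
    return 0;
}

/* stubs: a real bridge must track watches by fd to modify/remove them */
static int32_t bridge_dispatch_mod(enum qb_loop_priority p, int32_t fd,
                                   int32_t evts, void *data,
                                   qb_ipcs_dispatch_fn_t fn) { return 0; }
static int32_t bridge_dispatch_del(int32_t fd) { return 0; }

static int32_t s1_accept(qb_ipcs_connection_t *c, uid_t uid, gid_t gid)
{
    return 0;   /* accept everyone in this sketch */
}

static int32_t s1_msg(qb_ipcs_connection_t *c, void *data, size_t size)
{
    return 0;   /* no-op message handler */
}

int main(void)
{
    struct qb_ipcs_service_handlers sh = {
        .connection_accept = s1_accept,
        .msg_process       = s1_msg,
    };
    struct qb_ipcs_poll_handlers ph = {
        .dispatch_add = bridge_dispatch_add,
        .dispatch_mod = bridge_dispatch_mod,
        .dispatch_del = bridge_dispatch_del,
    };
    GMainLoop *gloop = g_main_loop_new(NULL, FALSE);
    qb_ipcs_service_t *srv = qb_ipcs_create("prio_demo", 0, QB_IPC_SHM, &sh);

    qb_ipcs_poll_handlers_set(srv, &ph);
    /* QB_IPCS_RATE_FAST maps the channel's descriptors to the high
     * loop priority -- the configuration the message above refers to */
    qb_ipcs_request_rate_limit(srv, QB_IPCS_RATE_FAST);
    qb_ipcs_run(srv);

    g_main_loop_run(gloop);
    return 0;
}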

jnpkrn added a commit to jnpkrn/pacemaker that referenced this pull request May 16, 2019
In high-load (or high-rate-config-change) scenarios, pacemaker-fenced
would be unable to provide service when effectively DoS'd with CIB
update notifications.  Try to reconcile that with an elevated priority
for the server's own listening interface in the mainloop; at worst, it
will fence with a slightly outdated config, which appears less bad than
not carrying out the execution at all, for instance.
Other daemons might be considered as well.
Other daemons might be considered as well.

Prerequisites:
- ClusterLabs/libqb#352

How to verify:
- mocked/based -N (see commit adding that module to mocked based daemon)
@jnpkrn
Copy link
Contributor Author

jnpkrn commented May 16, 2019

Not sure why the kronosnet CI is not passing on FreeBSD 12 (ipc.test).
It worked for me without a glitch on 12.0-RELEASE r341666 in a VM.

jnpkrn added the same commit (message as above) to jnpkrn/pacemaker,
referencing this pull request, four more times on May 17, 2019
jnpkrn force-pushed the ipc-server-temporary-channel-priority-loss branch 3 times, most recently from 58a8d86 to 2e573d9, May 22, 2019 20:30
@jnpkrn commented May 22, 2019

It turns out that the bug is effectively unspottable with libqb's na(t)ive
event loop implementation (qbloop.h). Naive, for it doesn't even
implement priority levels properly[*], making broken SW behave as if
everything was alright :-/

[*] actually that's too harsh, since it's unstated whether this implementation
is this forgiving on purpose (although for soft real-time purposes, it'd
be a stretch to call this a desired property), and it does some sort of
(again naive) priority balancing

Do not merge yet; the new unit test currently errs on the safe side, marking
a failure even after the fix -- need to find a reliable/reproducible
balance.

jnpkrn force-pushed the branch from 2e573d9 to e9339be, May 22, 2019 20:42
jnpkrn added the same commit (message as above) to jnpkrn/pacemaker,
referencing this pull request, May 23, 2019
jnpkrn closed this May 24, 2019
jnpkrn force-pushed the branch from e9339be to d90d45f, May 24, 2019 20:33
jnpkrn reopened this May 24, 2019
@jnpkrn commented May 24, 2019

Honestly, this was the most painful unit test ever for me.

@chrissie-c if you are OK with this, I'd prefer pushing it myself.

jnpkrn force-pushed the branch from 58f7a27 to cf3d6e0, May 24, 2019 20:38
jnpkrn added a commit to jnpkrn/pacemaker that referenced this pull
request May 24, 2019 -- same message as above, but with an expanded
prerequisites note:

Prerequisites:
- ClusterLabs/libqb#352
  (libqb used to contain a bug whereby one particular step in the
  initial-client-connection-accepting-at-the-server procedure would be
  carried out with a hard-coded -- and hence possibly lower than
  competing events' -- priority, which backfires exactly in this case
  once the pacemaker part is fixed, by means of elevating the priority
  of fenced's API end-point so that it won't get consistently
  overridden by a non-socket-based event source/trigger)

How to verify:
- mocked/based -N (see commit adding that module to mocked based daemon)
include/qb/qbloop.h (excerpt under review):
 * priorities, these are rather advisory, however. For the true real-world
 * systems with requirements of deterministic responses, you may be better
 * served with these priorities strictly separated and abided, which is the
 * case, e.g., with GLib. On the other hand, for early stages and timing
@jfriesse (Member) commented May 28, 2019:

I don't really have too high an opinion of the libqb loop, but it may be helpful to be a little more specific. What do you mean by "... are rather advisory"? Why doesn't it fulfill "deterministic" guarantees (and maybe add a small note on why GLib does)? Also, are you sure that your advice really holds for all (or at least most of the) "real-world" systems (do you have any "hard" data)?

A contributor replied:

The wording there is hard for me to understand; it's not idiomatic modern English. I'll work out a better phrasing and let you know what I come up with.

@jnpkrn commented May 28, 2019 via email

jnpkrn force-pushed the branch from cf3d6e0 to 5233bea, May 28, 2019 18:40
@jnpkrn commented May 28, 2019

Reworded and extended the commentary in qbloop.h, fixing a typo along
the way.  Honestly, I have no idea to what extent the non-pollable
sources in GLib's mainloop are subject to priority-based serialization,
so I narrowed the stated example down to just that. Also added a
(currently extraneous) prerequisite on GLib, so that those new tests
won't get accidentally skipped.

jnpkrn force-pushed the branch from 5233bea to 9aa5165, May 28, 2019 18:50
@jfriesse commented May 29, 2019

> On 28/05/19 02:03 -0700, Jan Friesse wrote:
>> jfriesse commented on this pull request. I don't really have too high
>> an opinion of the libqb loop, but it may be helpful to be a little
>> more specific. What do you mean by "... are rather advisory"?
>
> It's basically a hint rather than a strict priority ordering abided by
> when the handling order of concurrent events is to be decided. AFAIK,
> for the three priorities available, the ratio of serving them is 3:2:1
> -- it's just a probabilistic method of dealing with priorities, which
> is rather atypical, and makes a (high-rate)

Ok, but is it necessarily bad?

> high-prio batch event stream possibly blocked due to slow-to-handle
> (a problem of the application, but nevertheless...) low-prio events.

Yep. So why not describe it and let the user choose? Maybe this behavior is a better fit?

>> Why doesn't it fulfill "deterministic" guarantees (and maybe a small
>> note on why GLib does)?
>
> Deterministic ("absolutistic") as opposed to probabilistic (inherently
> nondeterministic), see above. If you want to take me too literally, of
> course, when there's only a single event source (in possession of both
> differently prioritized channels) for the event handler, it's still
> deterministic, since this source has full control of the requests
> (unless they are running fully asynchronously), but that's gone when
> there are independent sources.

The current loop is as deterministic as the glib implementation. It's just different behavior.

>> Also, are you sure that your advice really holds for all (or at least
>> most of the) "real-world" systems?
>
> I am actually not aware of systems where a probabilistic approach to
> priorities would get used (apart

So no hard data

> from libqb). Hand-written loop management is also typically based on
> implicit, strict priorities (a statically

Do you have any hard data?

> hard-coded order of handling the particular events). Experience (see
> the unit test) tells me that GLib behaves in what I refer to as a
> deterministic manner. That's somewhat more intuitive, yet more demanding

It's easier to describe glib behavior, but I wouldn't call it more deterministic.

> when it comes to correct usage (requiring starvation/deadlock-avoiding
> design considerations). It's basically a "process niceness vs.
> real-time scheduling" analogy. Libqb favours the former (without

Kind of the right analogy, and maybe nice to have it in the description.

> declaring that clearly, until the commit in question), GLib et al. the
> latter. Any suggestions on how to word this better while still guiding
> libqb users towards correct expectations are welcome.

Because chrissie is on it I will not add my description, but the idea is to really describe how the current implementation works and what its drawbacks may be. That's it. No need to describe feelings and/or give advice without any real evidence.

> -- Jan (Poki)
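
For a concrete feel of the behavior debated above, a small
self-contained sketch (illustrative only, not part of the PR; the
600-dispatch stop threshold is arbitrary): two permanently readable
descriptors registered at different priorities on libqb's native loop.
Under strict priority ordering the high-priority callback would be
served exclusively; the counts printed here instead come out roughly in
the ratio-based proportion described.

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <poll.h>
#include <qb/qbloop.h>

static qb_loop_t *loop;
static int hi_count, lo_count;

static int32_t hi_cb(int32_t fd, int32_t revents, void *data)
{
    if (++hi_count + lo_count >= 600) {
        qb_loop_stop(loop);     /* arbitrary stop threshold */
    }
    return 0;
}

static int32_t lo_cb(int32_t fd, int32_t revents, void *data)
{
    ++lo_count;
    return 0;
}

int main(void)
{
    int hi_pipe[2], lo_pipe[2];

    pipe(hi_pipe);
    pipe(lo_pipe);
    write(hi_pipe[1], "x", 1);  /* keep both read ends permanently readable */
    write(lo_pipe[1], "x", 1);

    loop = qb_loop_create();
    qb_loop_poll_add(loop, QB_LOOP_HIGH, hi_pipe[0], POLLIN, NULL, hi_cb);
    qb_loop_poll_add(loop, QB_LOOP_LOW,  lo_pipe[0], POLLIN, NULL, lo_cb);
    qb_loop_run(loop);

    /* with strict priorities, "low" would stay at 0 */
    printf("high: %d, low: %d\n", hi_count, lo_count);
    return 0;
}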

@chrissie-c (Contributor) commented:
My idea for the wording is much simpler and to the point.

"The priorities passed to the libqb loop functions are advisory and provide no guarantees as to the order the callbacks will be invoked."

I don't think we need to say any more than that. Offering comparisons with other libraries just runs the risk of things getting out of date.

@jfriesse commented May 29, 2019

@chrissie-c Yep, this short description probably serves as well as a more in-depth description of the current implementation, so ACK.

@jnpkrn commented May 29, 2019 via email

jnpkrn force-pushed the branch 2 times, most recently from 17be286 to 9606a27, May 29, 2019 15:07
jnpkrn force-pushed the branch from 1f409dd to b05828e, June 4, 2019 11:09
It turns out that while 7f56f58 allowed for less blocking (and thus
higher-throughput) initial handling of client connections within the
abstract (out-of-libqb managed) event loop, it unfortunately
re-subscribes itself to that polling mechanism for the UNIX-socket-check
with a default priority, which can be lower than the priority desired
(via explicit qb_ipcs_request_rate_limit() configuration) for the
particular channel (amongst attention-competing siblings in the pool;
"channel" here refers to the associated communication, that is, both
the server and the on-server abstraction for particular clients).
And priority-based discrepancies are not forgiven in true
priority-abiding systems (that is, unlike with libqb's native event
loop harness as detailed in the previous commit, which would
soft-tolerate this, hence the problem would not be spotted in the
first place -- but that's explicitly excluded from further discussion).

On top of that, it violates the natural assumption that once the
(single-threaded, which is imposed by libqb, at least between the
initial accept() and the after-said-UNIX-socket-check) server accepts
a connection, it shall take care of serving it (at least within the
stated initial scope of the client connection life cycle) rather than
rush to accept new ones -- which is exactly what used to happen
previously once the library user set the effective priority in the
abstract poll above the default one.

It's conceivable that, just as with the former case of
attention-competing siblings with higher priority -- whereby they could
_infinitely_ live on at the expense of starving the client in the
initial handling (authentication) phase, despite the library user's
as-high-as-siblings intention (the default priority was used for that
unconditionally instead, which is what we address here) -- deadlock is
imminent in this latter accept-to-client-authentication-handling case
as well, if there's an _unlimited_ fast-paced arrival queue (well,
limited by the number of allowable open descriptors within the system,
but with the Linux built-in maximum of 1M there may be no practical
difference, at least for time-sensitive applications).

The only hope, then, is that such deadlocks are rather theoretical,
since a "spontaneous" constant stream of either communication on
unrelated, higher-prio sibling channels or of new connection arrivals
may well testify to the poor design of the libqb IPC application.
That being said, the unconditional default priority in the isolated
context of initial server-side client authentication is clearly
a bug, but such an application shall apply appropriate rate-limiting
measures (exactly on a priority basis) to handle unexpected flux
nonetheless.

The fix makes test_ipc_dispatch_*_glib_prio_deadlock_provoke tests pass.

Signed-off-by: Jan Pokorný <[email protected]>
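
For contrast, a minimal GLib-only sketch (far simpler than the actual
check_ipc.c tests, and assuming stock GLib semantics) of the
strict-priority starvation that the glib_prio_deadlock_provoke tests
build on: an always-ready high-priority source keeps a default-priority
one from ever being dispatched.

#include <glib.h>

static gboolean greedy(gpointer data)
{
    /* always ready again -> keeps winning the strict priority contest */
    return G_SOURCE_CONTINUE;
}

static gboolean starved(gpointer data)
{
    /* never reached while greedy stays installed */
    g_print("low-priority source finally served\n");
    return G_SOURCE_REMOVE;
}

int main(void)
{
    GMainLoop *gloop = g_main_loop_new(NULL, FALSE);

    g_idle_add_full(G_PRIORITY_HIGH, greedy, NULL, NULL);
    g_idle_add_full(G_PRIORITY_DEFAULT, starved, NULL, NULL);

    g_main_loop_run(gloop);     /* spins on greedy; starved never runs */
    return 0;
}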
It's misleading to a random code observer, at least,
hiding the fact that what failed is actually the queuing up
of some handling to perform asynchronously in the future,
rather than invoking it synchronously right away.

Signed-off-by: Jan Pokorný <[email protected]>
jnpkrn force-pushed the branch from b05828e to bec6285, June 5, 2019 09:31
@jnpkrn commented Jun 5, 2019

Regarding "test_ipc_dispatch_shm_native_prio_deadlock_provoke"
failing: I've seen that as well, but it went away after the "fix",
I think. I'll take a closer look.

Fixed, looks good to me now.

jnpkrn added the same commit (message as above, with the expanded
prerequisites note) to jnpkrn/pacemaker, referencing this pull request,
Jun 5, 2019
@chrissie-c (Contributor) commented:
Can we have Ken's text in the comment please?

jnpkrn added the same commit (message as above, with the expanded
prerequisites note) to jnpkrn/pacemaker twice more, referencing this
pull request, Jun 6, 2019
@jnpkrn commented Jun 6, 2019

They are not representative of the tricky nature I'd like to point
out -- and which likely caused the problem I fixed in the first place.

Is the current form a blocker anywhere? If so, where in
particular?

The point of arbiters is to judge fairly, not with the hive mind of the
clique they happen to be in. I'd appreciate it if that were the case.

Apparently, the discussed matter was rather interesting for others to
learn about (thanks @wferi for your attendance here!), and I think the
current form discusses the possible shortcomings in about the right
depth for a newcomer to libqb's event loop.

@kgaillot (Contributor) left a review:

> The point of arbiters is to judge fairly, not with the hive mind of
> the clique they happen to be in. I'd appreciate it if that were the case.

You're in luck, it is.

When several people agree on a perspective different from yours, you should at least consider the possibility that maybe they're not irrational colluders trying to defeat progress, and instead that just maybe your submission is not accomplishing its intended goal.

> Apparently, the discussed matter was rather interesting for others to
> learn about (thanks @wferi for your attendance here!), and I think the
> current form discusses the possible shortcomings in about the right
> depth for a newcomer to libqb's event loop.

I agree that some elaboration of what the priorities mean and how they interact would be useful. I disagree that you're successfully accomplishing that here, or that such a lengthy discussion belongs in the API documentation.

(six resolved review threads on include/qb/qbloop.h, now outdated)
jnpkrn closed this Jun 7, 2019
jnpkrn force-pushed the branch from bec6285 to d90d45f, June 7, 2019 14:47
jnpkrn reopened this Jun 7, 2019
@jnpkrn commented Jun 7, 2019

Tried to rework and relocate the commentary to follow the suggestion
of less wordiness and more practical usability. Also put what is, IMHO,
the more correct stress (out of the whole range of arguments in the
discussion) on why some implementors may fall short and will need to
use another event loop implementation ... CS (therefore IT, when done
right) is in essence based on problem reducibility, and more powerful
means are more easily reduced to solutions requiring weaker power than
the other way around (see, for instance, how it took ages for yum to
adopt the proper solution to its "problem of satisfiability", for which
there are stock solutions called SAT solvers -- all the struggle prior
to that could have been avoided if this problem-reducibility gist had
been duly followed).

Make the qbipcs.h module interdependence clear (also shedding light on
some semantic dependencies) as well.

Signed-off-by: Jan Pokorný <[email protected]>
We want to run each and every test we can, without reliance on
transitive dependencies and environment "invariants".

Signed-off-by: Jan Pokorný <[email protected]>
jnpkrn force-pushed the branch from 0ef6302 to b46a574, June 7, 2019 15:38
@jnpkrn commented Jun 7, 2019

(typo fix)

@chrissie-c (Contributor) commented:
TBH I'm unhappy you pushed this change without my approval. I'm still not happy with the comment in qbloop.h, and I really don't see the need for the glib code in the test suite. Yes, there are a lot of good things in the check_ipc.c patch here (for which, thank you), but they really should have gone in a separate PR rather than being bundled with this one.

@jnpkrn commented Jun 12, 2019 via email

@chrissie-c (Contributor) commented:
Ken is not the libqb maintainer, much as I respect him. I am.

I'm all in favour of good unit tests, of course, but the ones we have are already annoyingly fragile, and adding more complexity is not helpful. Especially, adding a comparison with a different library is just asking for unexpected failures.

@jnpkrn commented Jun 13, 2019

[sorry, my former response was in part based on misinterpreting
check_ipc.c as qbloop.h]

Re complexity:

That's why one of the commits introduced better traceability of which
role, amongst the multiple processes employed in a test, is the source
of a given message -- I had enjoyed much fun without that originally,
which led me to add it.

I see the risk factors; any new failures stemming from there go onto
my shoulders.

But I was reasonably confident that the cut wasn't that massive, and
the total time trimming was partially a side effect of adding new
synchronization that was needed between a particular phase of execution
of what I call the alphaclient (running its priority events as a
never-ending stream) and a regular client (which, in the GLib case, is
demonstrably starved out by the former). Since the mechanism was
already devised, it was low-hanging fruit to use it also for the
server-to-client synchronization for which there was formerly a
sleep(1).

Frankly, I am constantly on the verge regarding complexity.
The best solution for that is to have rather clearly defined building
blocks without excessive stretch, and to combine them frequently to
achieve the objectives, I think. While I increased the complexity, it
was an amortized amount not proportional to the number of addressed
code locations, exactly because of that reuse, and the better
traceability was also meant to counter the effect. The next level, when
it seems unbearable, will likely be an auxiliary module with
test-agnostic helpers (said building blocks) kept apart and well
documented, perhaps. I don't know of a better approach, sadly, apart
from the worst stance -- letting it all rot away for fear of touching
anything.

@chrissie-c (Contributor) commented:
I obviously didn't make myself clear.

While the initial patch was great, and the general updates to check_ipc.c are welcome, there are things I do NOT want in libqb, and I think you've overstepped the mark in committing them without approval.

  • The glib code has no place here. It's adding pointless complexity to the test suite for almost no perceivable value - it needs removing.
  • The comment in qbloop.h is borderline incomprehensible and needs fixing. Either remove it completely or replace it with my shortened suggestion: "The priorities passed to the libqb loop functions are advisory and provide no guarantees as to the order the callbacks will be invoked."

These comments also apply to your patch (also committed without my approval) to the version_1 branch.

Thank you.

@jnpkrn commented Jun 17, 2019 via email

bmwiedemann pushed a commit to bmwiedemann/openSUSE that referenced this pull request Jan 15, 2022
https://build.opensuse.org/request/show/946348
by user yan_gao + dimstar_suse
- Retry if posix_fallocate is interrupted with EINTR (#453) (gh#ClusterLabs/libqb#451, bsc#1193737, bsc#1193912)
  * bsc#1193737-0001-Retry-if-posix_fallocate-is-interrupted-with-EINTR-4.patch

- IPC: server: avoid temporary channel priority loss, up to deadlock-worth (gh#ClusterLabs/libqb#352, rh#1718773, bsc#1188212) (forwarded request 946347 from yan_gao)