glusterd: avoid starting the same brick twice #4088
base: devel
Conversation
/run regression
CLANG-FORMAT FAILURE: index 91606dd40..a864a6cd2 100644
--- a/xlators/mgmt/glusterd/src/glusterd-handler.c
+++ b/xlators/mgmt/glusterd/src/glusterd-handler.c
@@ -6375,8 +6375,8 @@ __glusterd_brick_rpc_notify(struct rpc_clnt *rpc, void *mydata,
}
gf_msg(this->name, GF_LOG_INFO, 0, GD_MSG_BRICK_DISCONNECTED,
- "Brick %s:%s has disconnected from glusterd.",
- brickinfo->hostname, brickinfo->path);
+ "Brick %s:%s has disconnected from glusterd.",
+ brickinfo->hostname, brickinfo->path);
ret = get_volinfo_from_brickid(brickid, &volinfo);
if (ret) {
@@ -6385,8 +6385,7 @@ __glusterd_brick_rpc_notify(struct rpc_clnt *rpc, void *mydata,
goto out;
}
gf_event(EVENT_BRICK_DISCONNECTED, "peer=%s;volume=%s;brick=%s",
- brickinfo->hostname, volinfo->volname,
- brickinfo->path);
+ brickinfo->hostname, volinfo->volname, brickinfo->path);
/* In case of an abrupt shutdown of a brick PMAP_SIGNOUT
* event is not received by glusterd which can lead to a
* stale port entry in glusterd, so forcibly clean up
@@ -6405,7 +6404,8 @@ __glusterd_brick_rpc_notify(struct rpc_clnt *rpc, void *mydata,
gf_msg(this->name, GF_LOG_WARNING,
GD_MSG_PMAP_REGISTRY_REMOVE_FAIL, 0,
"Failed to remove pmap registry for port %d for "
- "brick %s", brickinfo->port, brickinfo->path);
+ "brick %s",
+ brickinfo->port, brickinfo->path);
ret = 0;
}
}
There was a race in the glusterd code that could cause two threads to start the same brick at the same time. One of the bricks will fail because it will detect the other brick running. Depending on which brick fails, glusterd will report a start failure and mark the brick as stopped even if it's running.

The problem is caused by an attempt to connect to a brick that's being started by another thread. If the brick is not fully initialized, it will refuse all connection attempts. When this happens, glusterd receives a disconnection notification, which forcibly marks the brick as stopped.

Now, if another attempt to start the same brick happens, it will believe that the brick is stopped and start it again. If this happens very soon after the first start attempt, the checks done to see whether the brick is already running will still fail, triggering the start of the brick process again. One of the bricks will fail to initialize and will report an error. If the failed one is processed by glusterd second, the brick will be marked as stopped, even though the process is actually there and working.

Fixes: gluster#4080
Signed-off-by: Xavi Hernandez <[email protected]>
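For reference, here is a rough, self-contained sketch of the idea behind the fix, as far as the hunks quoted further down suggest (a glusterd_is_brick_started() check in the disconnect handler). The types and helper below are simplified stand-ins, not glusterd's real structures:

```c
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for glusterd's per-brick state; the real
 * glusterd_brickinfo_t has many more fields. */
typedef enum { BRICK_STOPPED, BRICK_STARTING, BRICK_STARTED } brick_status_t;

typedef struct {
    brick_status_t status;
    bool start_triggered; /* a start has been requested for this brick */
} brickinfo_t;

/* Disconnect handling, reduced to the part this PR is about: only a
 * brick that actually reached the started state is marked stopped. A
 * refused connection to a brick that is still initializing in another
 * thread leaves its state untouched, so no duplicate start is
 * triggered later. */
static void
on_brick_disconnect(brickinfo_t *brick)
{
    if (brick->status != BRICK_STARTED) {
        /* Another thread is still bringing this brick up; let that
         * thread adjust the state and flags when it finishes. */
        return;
    }
    brick->status = BRICK_STOPPED;
    brick->start_triggered = false; /* a later start must be allowed */
}

int
main(void)
{
    brickinfo_t brick = { .status = BRICK_STARTING, .start_triggered = true };

    on_brick_disconnect(&brick); /* early, spurious disconnect */
    printf("status=%d start_triggered=%d\n", brick.status,
           brick.start_triggered);
    return 0;
}
```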
/run regression
1 test(s) failed
0 test(s) generated core
5 test(s) needed retry
1 flaky test(s) marked as success even though they failed
ret = get_volinfo_from_brickid(brickid, &volinfo);
if (!glusterd_is_brick_started(brickinfo)) {
Do we not need to set start_triggered to false also if brick is not started?
No. Setting start_triggered to false is precisely what causes glusterd to try to start the brick twice.
If the brick is still starting here, it means that someone else is managing it, so it's better not to touch anything and let the other thread adjust the state and flags as necessary.
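To make this concrete with a toy model (the names below are illustrative, not glusterd's real API): if an early disconnect clears the flag, a second start request ends up spawning a second process for the same brick.

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy model of the race under discussion; not glusterd code. */
typedef struct {
    bool start_triggered;
    int processes_spawned;
} brick_t;

static void
request_start(brick_t *b)
{
    if (b->start_triggered)
        return;             /* a start is already in flight */
    b->start_triggered = true;
    b->processes_spawned++; /* stands in for spawning a brick process */
}

int
main(void)
{
    brick_t b = { 0 };

    request_start(&b);         /* thread 1 triggers the start          */
    b.start_triggered = false; /* early disconnect wrongly clears flag */
    request_start(&b);         /* thread 2 triggers the start again    */

    /* Prints 2: the duplicate start that keeping the flag set (while
     * the brick is still starting) avoids. */
    printf("processes spawned: %d\n", b.processes_spawned);
    return 0;
}
```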
The flag was introduced by the patch https://review.gluster.org/#/c/glusterfs/+/18577/, which was specific to the brick_mux environment, though it is applicable everywhere, and the change would not be easy to validate. The purpose of this flag is to indicate that brick_start has been triggered, and it remains true until the brick has been disconnected, so if we did not reset it, the brick would not start again. We can only hit this race scenario in the brick_mux case, while running brick stop/start continuously in a loop.
IMO the whole start/stop logic is unnecessarily complex, but it's very hard to modify it now. The main problem here is that any attempt to connect to the brick while it's still starting will fail, so the current code marks the brick as down while it's actually still starting and will most probably come up successfully. So I think that marking it as stopped and clearing the start_triggered flag is incorrect (basically this is what causes another start attempt from another thread to create a new process).
However, after looking again at the code, it seems that we can start bricks in an asynchronous mode (without actually waiting for the process to start up), and there's no callback in case of failure. This means that no one will check whether the process actually started in order to mark the brick as stopped on error. Even worse, just after starting a brick asynchronously, a connection attempt is made, which may easily fail under some conditions (I can hit this issue almost 100% of the time by running some tests on a zram disk).
How would you solve this issue? I guess that making all brick starts synchronous is not an option, right?
Yes, that would be a good idea. When we start a brick asynchronously, the challenge is how to make sure the brick has started successfully so that a connection with glusterd can be established.
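Just to make that question concrete, one purely hypothetical shape such a check could take (this is not what glusterd does today, and the path, retry counts and helper name are made up for illustration): poll the brick's pidfile and verify the process exists before the first connection attempt.

```c
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical readiness probe: wait until the brick's pidfile exists
 * and the process it names is alive, then allow the first connection
 * attempt. It does not guarantee the brick is accepting connections,
 * only that the process came up. */
static int
wait_for_brick_process(const char *pidfile, int attempts, useconds_t delay_us)
{
    for (int i = 0; i < attempts; i++) {
        FILE *fp = fopen(pidfile, "r");
        if (fp) {
            long pid = 0;
            int parsed = (fscanf(fp, "%ld", &pid) == 1);
            fclose(fp);
            /* kill(pid, 0) sends no signal; it only checks existence. */
            if (parsed && pid > 0 && kill((pid_t)pid, 0) == 0)
                return 0;
        }
        usleep(delay_us);
    }
    return -1; /* never came up; treat the start as failed */
}

int
main(void)
{
    /* Example path used for illustration only. */
    if (wait_for_brick_process("/var/run/gluster/example-brick.pid", 50,
                               100000) == 0)
        printf("brick process is up, trying to connect\n");
    else
        printf("brick did not come up in time\n");
    return 0;
}
```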
/run s390-regression
Thank you for your contributions.
There was a race in the glusterd code that could cause two threads to start the same brick at the same time. One of the bricks will fail because it will detect the other brick running. Depending on which brick fails, glusterd will report a start failure and mark the brick as stopped even if it's running.
The problem is caused by an attempt to connect to a brick that's being started by another thread. If the brick is not fully initialized, it will refuse all connection attempts. When this happens, glusterd receives a disconnection notification, which forcibly marks the brick as stopped.
Now, if another attempt to start the same brick happens, it will believe that the brick is stopped and start it again. If this happens very soon after the first start attempt, the checks done to see whether the brick is already running will still fail, triggering the start of the brick process again. One of the bricks will fail to initialize and will report an error. If the failed one is processed by glusterd second, the brick will be marked as stopped, even though the process is actually there and working.
Fixes: #4080