Thin-Arbiter-Volumes.md has inaccurate and incomplete information. #621
Comments
@Sheetalpamecha @aspandey @itisravi @karthik-us can you please have a look when you get a chance... thanks.
After some more digging, there is this: https://review.gluster.org/#/c/glusterfs/+/20056/ which has a script to assist with configuring a service for the thin-arbiter process, as well as a template .vol file at glusterfs/extras/thin-arbiter/thin-arbiter/thin-arbiter.vol. It appears that you can't run a thin-arbiter on a node that is also running glusterd by default (I was trying to test out one node from cluster2 being a thin-arbiter for cluster1). A couple of things that I can't seem to find:
It looks like the arbiter for 8.3 is not the same op-version as glusterfs?

```
# gluster peer probe arbiter
peer probe: failed: Peer arbiter does not support required op-version
```
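A minimal sketch of comparing op-versions on the two sides before probing, assuming the gluster CLI is available on both nodes (the exact values depend on the installed releases):

```sh
# On a node in the existing cluster: show the op-version the pool is running at.
gluster volume get all cluster.op-version

# On both the cluster node and the would-be arbiter: compare installed releases.
gluster --version
```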
Thanks @itisravi. Is there a way to verify the status of the arbiter? If the arbiter is online but unreachable from the cluster, how would I know? I don't really want to take a brick offline to see if the arbiter still allows writes to the other brick. Is there another way to verify the arbiter status?
Ctl-]
@amarts how does this verify that the arbiter is working?
If I look at logs for the arbiter, it seems as though it might not be working, and I don't see how telnetting to the arbiter verifies its operational status.
Even if you verify the service status, it doesn't validate that the thin-arbiter is operational.
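As an illustration of that point, a minimal sketch of such a service check, assuming the gluster-ta-volume unit shipped in glusterfs/extras/thin-arbiter was installed (the unit name is an assumption and may differ):

```sh
# On the arbiter node: confirms only that the thin-arbiter brick process is
# running, not that clients can actually reach it over the network.
systemctl status gluster-ta-volume
ps -ef | grep '[t]hin-arbiter'
```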
To clarify: I'm thinking all of this information would be useful to have in Thin-Arbiter-Volumes.md. If you want me to take a stab at a pull request, let me know. I also haven't found an example for using setup-thin-arbiter.sh yet. I'm guessing something like:
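For reference, a minimal sketch of what the thin-arbiter process itself looks like when started by hand, assuming the template volfile from the patch linked above has been copied to /var/lib/glusterd/thin-arbiter/thin-arbiter.vol; the volfile-id, path, and port here are assumptions taken from that template, not the script's documented interface:

```sh
# Run the thin-arbiter brick process in the foreground from the template
# volfile; the setup script appears to wrap an invocation like this in a
# systemd service.
glusterfsd -N --volfile-id ta-vol \
  -f /var/lib/glusterd/thin-arbiter/thin-arbiter.vol \
  --brick-port 24007 \
  --xlator-option ta-vol-server.transport.socket.listen-port=24007
```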
It needs to be reachable only from the (fuse) clients and not the cluster. So if it is not connected to any of the bricks, including the TA brick, the fuse mount logs will have messages like:
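A hedged way to look for those messages on a client, assuming a fuse mount logged under the default /var/log/glusterfs/ location (the mount point below is a placeholder and the exact message text varies by version):

```sh
# The fuse mount log is named after the mount point, e.g. a volume mounted at
# /mnt/ta-vol is logged to /var/log/glusterfs/mnt-ta-vol.log.
grep -iE 'thin-arbiter|disconnect' /var/log/glusterfs/mnt-ta-vol.log | tail -n 20
```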
Sure, go ahead.
Slide 23 of
Is it possible to remove a thin-arbiter brick?
Per that slide deck, support for add/replace brick is still on the TODO list.
Yes, @Sheetalpamecha is working on this via gluster/glusterfs#1528.
I fought with Thin Arbiter last night too. The MD file doesn't give ANY information about VOLUME_FILE, or how to get or create it. The command to create a volume with a thin arbiter is still inaccurate. And even though I did my best to configure the thin arbiter correctly, I have no idea how to verify whether it works or not. And because there is no way to reconfigure the volume (gluster/glusterfs#1528 is dead for now), any fix in the future means getting all the data out of the volume and re-creating it. I think this part of the documentation needs significant improvements...
Finally I found a way to get a GlusterFS Thin Arbiter up and running. Here is my how-to: https://polach.me/posts/howto-setup-glusterfs-thin-arbiter-at-homelab/ Maybe it can save some time for others...
The command 'glustercli' is used, which isn't referenced anywhere in the docs except back in the changelog for v4.
The command to create a thin-arbiter volume file does not work. A file is not created, resulting in the following showing up in the logs.
Should be changed to:
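For comparison, a hedged sketch of the replica 2 + thin-arbiter 1 create syntax as documented in later upstream releases; the volume name, host names, and brick paths are placeholders:

```sh
# Two data bricks plus a lightweight thin-arbiter brick; the path on the
# arbiter host holds only per-volume replica-ID files, not user data.
gluster volume create testvol replica 2 thin-arbiter 1 \
  server1:/bricks/brick1 server2:/bricks/brick2 arbiter-host:/bricks/ta
```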