
GlusterFS Server Volume Plugin #4

Open
trajano opened this issue Apr 20, 2018 · 7 comments
Assignees
Labels
enhancement New feature or request

Comments

@trajano
Owner

trajano commented Apr 20, 2018

The glusterfs-volume-plugin wraps a GlusterFS-FUSE client to connect to a GlusterFS server cluster. Instead of having the managed plugin act only as a client, use it as the actual GlusterFS server.

This issue tracks the concept and high-level architecture to assess the feasibility of such an endeavor.

The objective of the plugin is to abstract away brick and volume management while inside the swarm, so there will be limitations to simplify usage:

  • There is only one GlusterFS pool for the swarm
  • There is only one GlusterFS volume
  • All the bricks that are allocated will be given to the GlusterFS volume.
  • There is only one instance of the plugin per node, and it will use host networking, exposing the GlusterFS ports to the rest of the network
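Under these constraints, installing the server-mode plugin on a node might look something like the following sketch. The plugin name, alias, and the BRICKS setting are illustrative assumptions, not a published interface:

```shell
# Hypothetical install of a server-mode plugin (names are illustrative).
# --grant-all-permissions is needed for device access and host networking.
docker plugin install --alias glusterfs-server \
  trajano/glusterfs-server-volume-plugin \
  --grant-all-permissions --disable
# One pool, one volume: only the node-local brick devices need configuring.
docker plugin set glusterfs-server BRICKS=/dev/vg0/brick1
docker plugin enable glusterfs-server
```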
@trajano trajano added the enhancement New feature or request label Apr 20, 2018
@trajano trajano self-assigned this Apr 20, 2018
@trajano
Owner Author

trajano commented Apr 20, 2018

Conceptually the plugin would be configured to run on a host and have access to the whole /dev/ tree of the host. The host is still expected to provide the physical volumes for the gluster cluster.

Configuration-wise, it would just be a list of XFS-formatted logical volumes (aka bricks) available on the node's /dev tree. These would be mounted internally in the plugin rootfs.
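As a sketch of that startup step, assuming a hypothetical BRICKS setting that lists the XFS logical volumes, the plugin could mount each device under its rootfs like this (paths and setting name are assumptions):

```shell
# Hypothetical startup inside the plugin rootfs: mount each configured
# XFS brick device under /bricks/<name>. BRICKS is an assumed setting.
BRICKS="/dev/vg0/brick1 /dev/vg0/brick2"   # illustrative device list
for dev in $BRICKS; do
  name=$(basename "$dev")
  mkdir -p "/bricks/$name"
  mount -t xfs "$dev" "/bricks/$name"
done
```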

The "seed node" of the pool will have an extra configuration entry to indicate that it is a seed node. The seed node will be responsible for creating the gluster volume and managing the bricks. There can be only one seed node in the swarm, which poses a single point of failure, so its responsibilities should be kept limited.

The seed node will monitor system events, specifically any node create/update/remove event.

For any new node, it will inspect the node and check whether it has the GlusterFS server volume plugin enabled. If so, it will add the node to the pool using the IP address in $.Status.Addr. Once added, all the bricks defined on that node will be added to the volume.
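The node-inspection step above can be sketched as follows. The JSON shape mirrors `docker node inspect` output, but detecting the plugin via a node label is an assumption; the real check might query the node's installed plugins instead:

```python
import json

def node_gluster_address(node_json, plugin_label="glusterfs-server"):
    """Return the node's Status.Addr if the GlusterFS server plugin
    appears enabled on it. Detection via a node label is an assumption
    made for this sketch."""
    node = json.loads(node_json)
    labels = node.get("Spec", {}).get("Labels", {})
    if labels.get(plugin_label) != "true":
        return None
    return node["Status"]["Addr"]

# Illustrative node-inspect payload for a node running the plugin.
sample = json.dumps({
    "Spec": {"Labels": {"glusterfs-server": "true"}},
    "Status": {"Addr": "10.21.165.19"},
})
print(node_gluster_address(sample))  # -> 10.21.165.19
```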

Remove events will remove the node's bricks from the volume and remove the node from the pool.

docker volume create will invoke gluster volume create to create the volume with a given volume type spanning all the bricks available in the swarm.
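For example, assuming the seed node has collected one brick per node, the generated command could be assembled like this (volume name, replica count, and brick paths are illustrative):

```shell
# Build the `gluster volume create` command from the bricks the seed
# node knows about; the replica count would come from volume options.
VOLUME=testvol
REPLICA=3
BRICKS="10.21.165.19:/bricks/brick1 10.21.45.141:/bricks/brick1 10.21.248.137:/bricks/brick1"
cmd="gluster volume create $VOLUME replica $REPLICA $BRICKS force"
echo "$cmd"
```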

@trajano
Owner Author

trajano commented Apr 20, 2018

If multiple seed nodes are detected, then pool and brick registrations are stopped.

@trajano
Owner Author

trajano commented Apr 20, 2018

Since Docker only allows updates to config labels, the labels will contain the actual configuration data, keyed against the value of SEED, which is the config name.
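A minimal sketch of that approach, assuming illustrative label names (the gluster.* keys and config name are not defined anywhere yet):

```shell
# Illustrative: carry the pool configuration in labels on a config
# object whose name matches the SEED value; label keys are assumptions.
echo "" | docker config create \
  --label gluster.bricks=/dev/vg0/brick1 \
  --label gluster.replica=3 \
  swarm-gluster-seed -
```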

@trajano
Owner Author

trajano commented Apr 20, 2018

The plugin would need to access the /var/run/docker.sock file in order to communicate with the engine.
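In a managed plugin's config.json, that access is expressed as a bind mount of the socket into the plugin rootfs; a minimal fragment could look like this:

```json
{
  "mounts": [
    {
      "source": "/var/run/docker.sock",
      "destination": "/var/run/docker.sock",
      "type": "bind",
      "options": ["rbind"]
    }
  ]
}
```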

@ruanbekker

Great Plugin! I've been looking forward to a glusterfs plugin for docker for some time now 😄 .

Is the plugin supported for swarm? I've set up a replicated storage pool over 3 nodes on GlusterFS, then installed and ran the plugin on all 3 nodes:

docker plugin install --alias glusterfs trajano/glusterfs-volume-plugin --grant-all-permissions --disable
docker plugin set glusterfs SERVERS=10.21.165.19,10.21.45.141,10.21.248.137
docker plugin enable glusterfs

But then when I create the volume, it seems like it does not replicate to the other nodes, or the other nodes are not aware of the volume:

node-a) docker volume create --driver glusterfs testvol
node-b) docker volume ls | grep testvol | wc -l
0

Or perhaps I'm doing something wrong?

@trajano
Owner Author

trajano commented Mar 4, 2019

It works in a swarm but you need to install the plugin on every node. That's a limitation of Docker at the moment.
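Given that limitation, a per-node install can be scripted, for example over SSH (the node names below are illustrative):

```shell
# Until Docker supports swarm-wide plugin deployment, install the
# plugin on each node individually; node list is illustrative.
for node in node-a node-b node-c; do
  ssh "$node" docker plugin install --alias glusterfs \
    trajano/glusterfs-volume-plugin --grant-all-permissions
done
```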

@ruanbekker

Thanks a lot @trajano for the feedback. I'm not sure why I expected the volume to show on every node (from volume ls). Tried it again, and noticed that as soon as the task with the configuration spawns on a node, the volume is visible from that node and everything works 100%.

Absolutely awesome! I thank you for developing this! :D
