Gluster Storage in Kubernetes

On the new cluster we are currently working on, we noticed that one of our pods failed when we tried to hit one of its REST endpoints. After a bit of investigation we remembered that, as a stopgap, we had used local storage for that pod. Kubernetes had restarted the pod, and when it moved to another host its storage was lost.

Storage is one of the most painful things to manage on Kubernetes. It gets especially hairy if you start on one cloud provider and then switch to bare metal or another cloud provider. You can use NFS, but that needs to be set up properly, which is a pain in its own right.

A colleague on my team has been investigating GlusterFS as the storage approach for our Kubernetes clusters. During the course of today we were bitten a number of times by storage issues, so as a team we decided it was time to sort the issue out. We mobbed on it, with the team member who had been investigating Gluster driving. The initial setup was much easier than I thought it would be. In a nutshell it involved the steps outlined below, which need to be run on each host that will provide Gluster storage (a rough command sketch follows the list):

  1. Partition the storage volume into:
    • A standard Linux partition to be used for things like Docker to store its layers.
      • This is not managed by Gluster.
    • An XFS partition which will be used by Gluster.
    • This partitioning needs to be done on every host that will provide what Gluster calls a brick, the directory Gluster stores its data in.
      • We only did this on the Kubernetes worker/slave nodes.
  2. Format the Gluster partition as XFS.
  3. Add an entry for the volume to fstab.
  4. Mount the volume.
  5. Install the Gluster CLI following the install guide.
  6. Decide on what replication and distribution strategy to use.
  7. Use commands similar to those listed on the architecture page to create the volume against the Gluster partition we created in step 1, e.g. gluster volume create test-volume server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4.
  8. Start the volume with gluster volume start gv0 (the glusterd service needs to be running on each of the hosts), again as outlined in the configuration guide.
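
For reference, here is a rough sketch of the commands involved. The device /dev/sdb1, the brick path /data/glusterfs, the volume name gv0 and the hosts server1 to server4 are stand-ins rather than the exact names we used, so adjust them for your own hosts:

    # Steps 2-4: format the Gluster partition as XFS, add it to fstab and mount it
    mkfs.xfs /dev/sdb1
    mkdir -p /data/glusterfs
    echo '/dev/sdb1 /data/glusterfs xfs defaults 0 0' >> /etc/fstab
    mount -a

    # Step 7: create the volume across the hosts (run once, from any node);
    # "replica 4" asks Gluster to keep a full copy on every brick listed
    gluster volume create gv0 replica 4 \
        server1:/data/glusterfs/brick1 \
        server2:/data/glusterfs/brick1 \
        server3:/data/glusterfs/brick1 \
        server4:/data/glusterfs/brick1

    # Step 8: start the volume
    gluster volume start gv0

Dropping the replica argument gives a purely distributed volume like the test-volume example above, where each file lives on only one brick.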

If you set up Gluster to do full replication across bricks, i.e. exact copies are stored on all bricks, then to test that Gluster is working (see the sketch after this list):

  1. Create a file/folder on one node.
  2. This should replicate to your other nodes almost immediately, assuming you have set it up to replicate everything across all bricks.
  3. Run ls in the volume mount on one of the other nodes and you should see the same file there.
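
Concretely, the check looks something like this, assuming the gv0 volume from the sketch above and a mount point of /mnt/gv0 (again, example names):

    # Mount the Gluster volume via the FUSE client on each node you want to test from
    mkdir -p /mnt/gv0
    mount -t glusterfs localhost:/gv0 /mnt/gv0

    # On the first node: create a file in the mounted volume
    touch /mnt/gv0/hello-from-node1

    # On another node: the file should appear almost immediately
    ls /mnt/gv0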

This was just day 1 of setting Gluster up. We will likely dig further into using this in the coming days.