Installing oVirt 3.1 and glusterfs using either NFS or Posix (Native) File System Volume Setup

Submitted by rmiddle on Tue, 07/03/2012 - 06:01

Gluster Volume Setup

Now that all the nodes are set up and added to the cluster, we need to tweak a few settings, create our first volume, and add it to the data center. Before completing these steps, please make sure you have gone through the earlier sections and confirmed that both the Engine and the Nodes are set up before continuing.

  1. The first thing we need to do is open the firewall so the systems can talk to each other.
  2. We need to open some ports. There are several ways to do this; I chose to edit /etc/sysconfig/iptables and add the following right above the line starting with # vdsm, before any of the reject lines:

     # glusterfs
     -A INPUT -p tcp -m multiport --dport 24007:24047 -j ACCEPT
     -A INPUT -p tcp --dport 111 -j ACCEPT
     -A INPUT -p udp --dport 111 -j ACCEPT
     -A INPUT -p tcp -m multiport --dport 38465:38467 -j ACCEPT
     # vdsm
  3. Now restart iptables:

     service iptables restart
  4. I prefer to create all my bricks under a folder called data. It isn't required, but all my examples will use it, so this is a good place to create the folder. You can name it something else if you prefer.

     mkdir -p /data
     chown -R 36.36 /data
  5. Now repeat these steps on all of your nodes in your cluster (one way to script this is sketched after this list).
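If you have several nodes, the same preparation can be applied in one pass. The loop below is only a sketch: it assumes root ssh access and the host1-host3 names used in the examples later in this article, and it adds the rules at runtime instead of editing the file, relying on the classic iptables init script to persist them.

    for node in host1 host2 host3; do
        ssh root@$node '
            # same gluster ports as the /etc/sysconfig/iptables edit above
            iptables -I INPUT -p tcp -m multiport --dport 24007:24047 -j ACCEPT
            iptables -I INPUT -p tcp --dport 111 -j ACCEPT
            iptables -I INPUT -p udp --dport 111 -j ACCEPT
            iptables -I INPUT -p tcp -m multiport --dport 38465:38467 -j ACCEPT
            service iptables save    # write the runtime rules back to /etc/sysconfig/iptables
            # brick parent folder, owned by vdsm:kvm (uid/gid 36)
            mkdir -p /data
            chown -R 36:36 /data
        '
    done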

Now we need to join all the nodes together under gluster.

  1. I have been told this happens automatically in a very narrow setup that I have never actually seen in the wild, so we will do it by hand.
  2. To be safe, check from one of the nodes whether any other nodes are already connected:

     gluster peer status

     Result: No peers present
  3. Now add your peers in:

     gluster peer probe hostname

     Result: Probe successful
  4. Repeat the last step for each node in the cluster (the probes can also be scripted; see the sketch after this list).
  5. Now let's confirm they are connected:

     gluster peer status

     Example results:

     Number of Peers: 2

     Hostname: host2
     Uuid: 3a1bd38c-2a54-47b2-86b9-5d56e3980a44
     State: Peer in Cluster (Connected)

     Hostname: host3
     Uuid: 7a2fd499-4db6-4810-87c9-b7c57aaebfb8
     State: Peer in Cluster (Connected)
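With more than a couple of nodes the probes are easy to script; a minimal sketch run from host1, using the host names from the example output above:

    # probe the remaining nodes from host1, then confirm the pool
    for peer in host2 host3; do
        gluster peer probe $peer
    done
    gluster peer status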

Now it's time to log back into the engine and create a volume.

  1. Log into the engine and select the tab marked Volumes
  2. Select Create Volume
  3. Name: Volume Name
  4. Type: Your choices are Distributed, Replicated, Striped, Distributed Striped, and Distributed Replicated. You can read more about these on gluster.com. In my example I am going to use Distributed Replicated.
  5. The Add Bricks dialog will open automatically and allow you to add the bricks for the hosts in your system.
  6. Select the node and enter the folder you want the brick created in. In my example I used host1:/data/vol1, host2:/data/vol1, host1:/data/vol2, host3:/data/vol1
  7. Select Ok to save the brick list.
  8. Select Ok to create the volume.
  9. Select the volume from the list.
  10. These settings might not be needed for glusterfs, but they were set when I set up Posix (Native) FS. The first one is needed for NFS for certain; the others I am less sure about, but they were enabled when I got gluster working.
    1. Select the lower tab Volume Options
    2. Select Add, then select nfs.nlm and set the option to off
    3. Select Add, then select nfs.register-with-portmap and set the option to on
    4. Select Add, then select nfs.addr-namelookup and set the option to off
    5. Right click on the volume and select start
  11. Now we need to chown all the gluster bricks to the KVM user so they can be mounted. This must be done on all cluster nodes. Since I store my bricks under /data, I just change ownership of the entire folder (a command-line sketch covering these volume steps follows this list):

     chown -R 36.36 /data/
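For reference, the same volume can also be built from the command line on any one node. This is only a sketch of the CLI equivalent of the GUI steps above, using the example brick paths and a hypothetical volume name of vol1; the GUI remains the path this guide follows.

    # create the Distributed Replicated (replica 2) volume from the example bricks
    gluster volume create vol1 replica 2 \
        host1:/data/vol1 host2:/data/vol1 \
        host1:/data/vol2 host3:/data/vol1

    # the NFS-related options set through the Volume Options tab
    gluster volume set vol1 nfs.nlm off
    gluster volume set vol1 nfs.register-with-portmap on
    gluster volume set vol1 nfs.addr-namelookup off

    gluster volume start vol1

    # on every node, the bricks must be owned by vdsm:kvm (uid/gid 36)
    chown -R 36:36 /data/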

Warning: if you activate more than one host under gluster native storage, the mount will rotate between the nodes.

See the Bugzilla report https://bugzilla.redhat.com/show_bug.cgi?id=835949 - [rhevm] PosixFS: contending storm when adding additional host to pool. It has been reported fixed, but I haven't confirmed it either way. To test whether you are seeing the problem, move all nodes but one into maintenance; if everything mounts fine, you are seeing this bug.
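A quick way to see what is going on from the one node left active (a sketch only; the exact log contents will vary):

    # confirm the gluster volume actually mounted on this node
    mount | grep glusterfs

    # if it did not, the mount failure should show up in the vdsm log
    grep -i mount /var/log/vdsm/vdsm.log | tail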

If you are using Posix Compatible (Native) FS

  1. Log into the engine and select the tab marked Storage
  2. Select New Domain
    1. Path: localhost:/volumename
    2. VFS Type: glusterfs
    3. Mount Options: vers=3
  3. Right Click on the new Storage Domain and select Activate
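If the domain fails to attach or activate, it usually helps to confirm by hand that a node can mount the volume with the same VFS type; a minimal sketch, assuming a volume named vol1:

    mkdir -p /mnt/test
    mount -t glusterfs localhost:/vol1 /mnt/test
    ls /mnt/test      # should list the (empty) volume without errors
    umount /mnt/test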

If you are using NFS

  1. Log into the engine and select the tab marked Storage
  2. Select New Domain
    1. Export Path: localhost:/volumename
  3. Right Click on the new Storage Domain and select Activate
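Likewise, if the NFS domain will not attach, gluster's built-in NFS server can be tested directly from a node; a sketch assuming a volume named vol1 (gluster's NFS server only speaks NFSv3 over TCP, hence the mount options):

    # check that the volume is being exported
    showmount -e localhost

    # test-mount it over NFSv3/TCP
    mkdir -p /mnt/test
    mount -t nfs -o vers=3,tcp localhost:/vol1 /mnt/test
    ls /mnt/test
    umount /mnt/test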

Adding ISO and Export NFS shares

  1. Select the tab marked Storage
  2. Select New Domain
    1. Domain Function / Storage Type: Export / NFS
    2. Export Path: enginehost:/nfs/export (see the sketch after this list for preparing this share)
  3. Right Click on the new Storage Domain and select Activate
  4. Now Select the ISO Domain
  5. Right Click on the ISO Domain and select Attach
  6. Select the default datacenter
  7. Right Click on the ISO Domain and select Activate
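The Export Path used above has to exist and be shared out by the engine host before the domain can be attached. A minimal sketch, assuming the /nfs/export path from the example and that the NFS server is already installed and running; tighten the export options to suit your network:

    # on the engine host
    mkdir -p /nfs/export
    chown -R 36:36 /nfs/export                # vdsm:kvm must own the export
    echo '/nfs/export *(rw)' >> /etc/exports  # wide-open example export
    exportfs -r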
You can now start to create Virtual Machines.