
Getting Gluster working with 2-node replication on CentOS

Gluster is a fantastic open-source clustering filesystem, allowing you to convert low-cost Linux servers into a single highly available storage array.

The project has recently launched the “Gluster Storage Platform”, which integrates the Gluster filesystem with an operating system and management layer, but if you want to add Gluster functionality to your existing servers without turning them into dedicated storage appliances, the documentation is a bit lacking.

Here’s an attempt to help anyone else out there get Gluster up and running, replicating a directory between 2 servers in Gluster’s “RAID 1” (mirrored) mode.

First of all, download the latest version of Gluster 3 from their FTP site. I downloaded 3.0.0-1; assuming you’re running CentOS on x86_64, you’ll need the following files:

glusterfs-client-3.0.0-1.x86_64.rpm

glusterfs-common-3.0.0-1.x86_64.rpm

glusterfs-server-3.0.0-1.x86_64.rpm

Once you’ve downloaded the 3 files to somewhere on your first node, run:

yum install libibverbs

rpm -ivh glusterfs-client-3.0.0-1.x86_64.rpm glusterfs-common-3.0.0-1.x86_64.rpm glusterfs-server-3.0.0-1.x86_64.rpm

This installs the Gluster software itself. Then copy the RPM files to your second node and repeat the rpm installation.
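If you’d rather do that copy and install over SSH, something like this should work, assuming you have root SSH access to the second node (node2ip stands for its real address, as before) and the RPMs are in your current directory:

scp glusterfs-*-3.0.0-1.x86_64.rpm root@node2ip:/tmp/
ssh root@node2ip "yum -y install libibverbs && rpm -ivh /tmp/glusterfs-*-3.0.0-1.x86_64.rpm"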

You’ll need to decide on a directory on each server to act as the datastore, so either pick an existing directory or (more likely) create a new one. In this case I’ve used “/home/export”. If it doesn’t already exist, run

mkdir /home/export

Assuming you’re using 2 nodes, next run this command on the first node to produce the Gluster configuration files, replacing the words node1ip and node2ip with the IP addresses or hostnames of the 2 nodes, and /home/export with your directory.

glusterfs-volgen --name store1 --raid 1 node1ip:/home/export node2ip:/home/export

This will create 4 files:

booster.fstab

store1-tcp.vol

node1ip-store1-export.vol

node2ip-store1-export.vol

Of these, booster.fstab is used for auto-mounting filesystems after reboots, so isn’t needed yet. Copy the store1-tcp.vol and node2ip-store1-export.vol files to the second node.
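For example, assuming root SSH access to the second node, scp will do the job (the files land in root’s home directory, ready for the cp commands below):

scp store1-tcp.vol node2ip-store1-export.vol root@node2ip:~/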

On the first node, run

cp node1ip-store1-export.vol /etc/glusterfs/glusterfsd.vol

cp store1-tcp.vol /etc/glusterfs/glusterfs.vol

On the second node, run

cp node2ip-store1-export.vol /etc/glusterfs/glusterfsd.vol

cp store1-tcp.vol /etc/glusterfs/glusterfs.vol

At this point, you should be ready to start the gluster services on both nodes, and mount the filesystems.

You need somewhere on each node to mount the replicated filesystem. In this case we’re using “/mnt/export”.
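If that directory doesn’t already exist, create it on both nodes first:

mkdir /mnt/export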

On each node, run

service glusterfsd start
mount -t glusterfs /etc/glusterfs/glusterfs.vol /mnt/export/
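If you want the volume to come back automatically after a reboot, the booster.fstab file generated earlier is Gluster’s own mechanism for that; alternatively, an ordinary /etc/fstab entry along these lines generally works with the glusterfs mount helper (a sketch, adjust the paths to match your setup):

/etc/glusterfs/glusterfs.vol /mnt/export glusterfs defaults,_netdev 0 0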

You should now have a working Gluster replication service between the 2 nodes. You can test this by running the following on the first node:

echo "Gluster is working" > /mnt/export/fileA

and on the second node run

cat /mnt/export/fileA

Assuming everything is working OK, you’ll see the message “Gluster is working” on your screen.

If you don’t get that, then take a look in the /var/log/glusterfs/ directory on both nodes to see what’s happening.
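For example, listing the directory and tailing the latest entries is usually enough to spot the problem (exact log file names vary, hence the wildcard):

ls -l /var/log/glusterfs/
tail -n 50 /var/log/glusterfs/*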

One thing I’ve noticed is that Gluster’s log files in /var/log/glusterfs often have names starting with a “-”, which confuses a lot of Unix command-line tools; if you refer to them using their full path, including the /var/log/glusterfs/ directory, you’ll have an easier time manipulating them.
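For instance, with a hypothetical log file called -mnt-export.log, either of these will work where a bare 'cat -mnt-export.log' wouldn't:

cat /var/log/glusterfs/-mnt-export.log
cd /var/log/glusterfs && cat ./-mnt-export.log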

Hopefully this will help people out there get started; for any more in-depth configuration help, I recommend the official Gluster documentation, which is comprehensive if slightly confusing.

Cloud computing price comparison stupidity

Microsoft have announced the pricing for their new Azure cloud-computing platform, and there have been quite a few articles comparing the pricing to that of Amazon’s AWS cloud computing platform, the largest existing cloud provider.

Most have focussed on Microsoft charging 0.5 cents less per hour than Amazon for a basic Windows instance, 12 cents vs 12.5 cents, and on whether they’ve done this to start a price war or simply to appear in line with the existing suppliers out there.
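To put that half-cent gap in perspective: over a typical month of roughly 730 hours of continuous running, 0.5 cents per hour works out to about $3.65 per instance (roughly $87.60 versus $91.25), hardly the basis for a price war on its own.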

However, these comparisons are just plain stupid, for one reason alone.

Each one provides a completely different definition of a CPU!

Amazon use the “EC2 Compute Unit”, which they say is based on:

We use several benchmarks and tests to manage the consistency and predictability of the performance of an EC2 Compute Unit. One EC2 Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor

Microsoft haven’t published an equivalent definition for their CPUs, but since Amazon haven’t published their exact benchmarks, it’s bound to be different.

Google, meanwhile, have their App Engine service, where they define CPU usage as:

CPU time is reported in “seconds,” which is equivalent to the number of CPU cycles that can be performed by a 1.2 GHz Intel x86 processor in that amount of time. The actual number of CPU cycles spent varies greatly depending on conditions internal to App Engine, so this number is adjusted for reporting purposes using this processor as a reference measurement.

The Google measurement is obviously fairly close to Amazon’s vague definition of a Compute Unit, but neither of them clearly specifies how the usage is actually measured, so any initial comparison is at best vague and at worst completely misleading.

The same is true of Rackspace’s Mosso cloud, and all the other cloud providers out there.

Until a standard CPU unit is defined publicly and agreed between the major suppliers (if that’s even possible), any comparisons between clouds based on a simple “CPU Time” measurement, are simply stupid.

Freeing up ESS disks when they are unavailable

Sometimes when an ESS vpath disk has previously been assigned to AIX and is now assigned to Windows, or vice versa, you aren’t able to access the disk even when everything looks correct. The first thing to check is whether the vpath still has a persistent reservation on it.

In AIX this is easy: run ‘lquerypr -vh /dev/vpathXX’ to see if a vpath has a persistent reservation, then ‘lquerypr -ch /dev/vpathXX’ to clear the reservation.
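For example, with a hypothetical device /dev/vpath12:

lquerypr -vh /dev/vpath12
lquerypr -ch /dev/vpath12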