Increasing our storage provision

During the summer we started getting tight on storage availability. Usage of our home directory areas constantly increases – people never delete stuff (me included!). We were running most of our storage through our Veritas Cluster from a pair of Sun 3511 arrays and a single 3510 array. Between them (taking mirroring into account) we had around 3TB of space.

Now, it’s a well-known fact about maintenance contracts that the cost goes up over time (parts get scarcer and more costly). So we did the sums on what we were paying to maintain the old arrays and realised that, over a sensible lifetime, it was cheaper to replace them. We ended up with a pair of Sun 2540 arrays with 12TB of capacity each.

Since our data is absolutely precious we mirror these arrays and run RAID 6 within each one. RAID 6 costs two disks’ worth of parity per array, and mirroring means only one array’s worth of space counts, so we end up with just under 10TB of usable space – still a fair amount more than we started with.

The next stage was to bring this online. Because we use Veritas Volume Manager and the Veritas File System we were able to do it almost transparently. The new arrays were provisioned and added to the relevant diskgroups, the volumes were mirrored onto them, and the filesystems were expanded. Finally the old arrays were disconnected. All of this was done without any downtime or interruption to our users or services.
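
For anyone doing something similar, the VxVM side looks roughly like this. All the diskgroup, volume and disk names below are made up, and this is a sketch of the procedure rather than a tested script – check everything against your own setup first:

# Make the new LUNs visible and initialise them for VxVM
vxdisk scandisks
/etc/vx/bin/vxdisksetup -i c4t0d0
/etc/vx/bin/vxdisksetup -i c4t0d1

# Add the new disks to the existing diskgroup
vxdg -g homedg adddisk home01=c4t0d0
vxdg -g homedg adddisk home02=c4t0d1

# Mirror the volume onto the new disks and watch the resync
vxassist -g homedg mirror homevol home01 home02
vxtask list

# Once the mirror is in sync, drop the plex on the old array
vxplex -g homedg -o rm dis homevol-01

# Grow the volume and its VxFS filesystem online (here by 6TB)
/etc/vx/bin/vxresize -g homedg homevol +6144g

# Finally, release the old array's disks from the diskgroup
vxdg -g homedg rmdisk old01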

I said almost transparently though. It seems it’s not possible to change the VCS coordinator disks without taking the diskgroups offline and back online (this might be improved in VCS 5). So I rebooted the whole cluster last weekend and it was all finished.

The problem with all this clever technology? Nobody knows we’ve done it. After weeks of work we grew the filesystems just before they completely filled, and without any noticeable downtime. We’d probably get more gratitude if we’d let it fill up first 😉


Getting the indexes right for OpenLDAP when using NSS

I recently deployed a Linux system which used the libnss-ldap module to get its passwd and group information. This all worked fine except for group lookups (in particular when logging in), which were extremely slow. We have about 600 groups in our directory, which isn’t massive but is more than the average system.
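
For context, the NSS side of this is just the standard libnss-ldap wiring in /etc/nsswitch.conf – something like the following, assuming a stock Debian-style setup:

passwd: files ldap
group:  files ldap
shadow: files ldap

Every login then has to query the directory to build the user’s supplementary group list, which is why slow group lookups hurt logins in particular.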

Clearly this wasn’t right. Initially I tried nscd, which helped, but only after it had cached the data. Then I realised it was probably the indexes in OpenLDAP. Googling didn’t turn up much of use (hence this post), but I did find this page on the OpenLDAP site.

This fairly quickly pointed me at the problem: I was missing indexes on memberUid and uniqueMember. Adding them fixed the slowness completely.
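
You can see the effect directly by timing the kind of query libnss-ldap fires off when building a user’s group list (the base DN and username here are placeholders):

time ldapsearch -x -H ldap://ldap.example.com \
    -b "ou=Group,dc=example,dc=com" \
    "(&(objectClass=posixGroup)(memberUid=jbloggs))" cn gidNumber

Without an equality index on memberUid, slapd has to test that filter against every group entry in the directory; with the index it goes straight to the matching entries. The uniqueMember index does the same job for groupOfUniqueNames-style groups.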

So here are the indexes I’ve ended up with:

index   objectClass     eq
index   cn,uid          eq
index   uidNumber       eq
index   gidNumber       eq
index   memberUid       eq
index   uniqueMember    eq
index   entryCSN        eq
index   entryUUID       eq

(the last two are for replication)
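
One thing to remember: adding index directives doesn’t index existing entries – you need to rebuild the indexes with slapindex while slapd is stopped. Something like this, where the paths and the openldap user are the Debian defaults:

/etc/init.d/slapd stop
slapindex -f /etc/ldap/slapd.conf
chown -R openldap:openldap /var/lib/ldap   # slapindex run as root leaves root-owned files
/etc/init.d/slapd start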

I’m actually quite surprised by how much the indexes matter. They make a huge difference, even on a small setup. So if you’re setting up a directory, take the time to read the Tuning section of the OpenLDAP Admin Guide first.
