Streaming music around my home

This weekend I thought I’d have a go at setting up SlimServer, an application that streams music to various audio devices. It’s primarily designed to work with the SqueezeBox, a hardware device that streams the music wirelessly, but there are various bits of software that can use it too.

The server itself was a doddle to set up. There is a FreeBSD port of it that does all the work for you. Once installed, you just run it and browse to port 9000 on the server to access it. It didn’t take long to index my MP3 collection, and then it was ready to go.
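For reference, the install boils down to something like this (I’m assuming the port lives at audio/slimserver, and the rc.d script name may differ on your system):

```shell
# Build and install the SlimServer port (port path assumed: audio/slimserver)
cd /usr/ports/audio/slimserver
make install clean

# Have it start at boot, then start it now (rc.d script name may vary)
echo 'slimserver_enable="YES"' >> /etc/rc.conf
/usr/local/etc/rc.d/slimserver.sh start

# The web interface then lives at http://<server>:9000/
```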

I started out by using Winamp to stream the music. That worked absolutely fine, but I wanted something I could run under FreeBSD. The idea was that I’d find an old piece of hardware that I could run headlessly in the lounge hooked up to our speaker system.

Various applications exist to do the job. The obvious choice is SoftSqueeze, a virtual SqueezeBox application provided in the SlimServer distribution. It’s Java-based, which makes it slightly more effort to get going on FreeBSD, but it works pretty well. It has a headless mode too, which is ideal for what I want.

Next up there’s slimp3slave, a small C application that does the same job as SoftSqueeze. It uses an external application such as mpg123 or madplay to actually play the audio, so it’s a fairly small app. Whilst slimp3slave itself doesn’t appear to have any problems, I didn’t have much success with the players: mpg123 got confused by the stream, and madplay kept skipping the beginnings of tracks when I hit next on the server. This could be a problem with slimp3slave – I’ll need to investigate.

Unfortunately SoftSqueeze isn’t faultless either. Whilst it plays fine, if you leave it idle for a long period of time something goes wrong and it refuses to play. I need to debug this further – it’s likely a FreeBSD related issue, since I know it works for other people on other platforms.

To finish the installation off I installed the MusicIP listener tool, which can generate playlists based on any track you give it. When first started it scans your whole collection, which takes forever, and builds a database of information about each track. It then uses this to match similar tracks together. It’s working surprisingly well so far.

The only problem with the MusicIP tool is that it’s a Linux binary. This meant activating Linux emulation on the server and installing the base Linux port. To use the client application (not actually needed, though, since you can do everything through the server) you need Java too – a Linux one. I only had success with the Blackdown 1.4 version.
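For anyone wanting to do the same, turning on Linux emulation is only a couple of steps (the port name is the one I used; newer ports trees may differ):

```shell
# Load the Linux ABI compatibility module now...
kldload linux

# ...and make sure it loads on every boot
echo 'linux_enable="YES"' >> /etc/rc.conf

# Install the Linux base system from ports
cd /usr/ports/emulators/linux_base
make install clean
```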

This lot is controlled via a web browser. This is fine if you’re sitting at a PC and streaming to an application on your machine. But what about the headless machine? Fortunately I have an iPAQ with wireless, and it does the job of a remote control perfectly.

Longer term, if I can’t solve the problems with SoftSqueeze or slimp3slave I’ll consider buying a SqueezeBox. They’re expensive though; £170 for a wired version and £210 for a wireless one. Until I can justify the expense (i.e. I’d actually use it) I won’t be forking out for one.


Urg, summer

Looks like summer is finally here, but for me that isn’t a good thing.

The main reason I dislike the summer is hayfever. I get it quite badly which means I spend most of the summer bunged up, itchy, and miserable. Add to that the heat and dryness we get in Kent and I’m not a happy chappy.

Thankfully I have our portable aircon unit running at the moment which has reduced the temperature of this room to a relatively cool 24 degrees; the rest of the house is much warmer.

Roll on September…


The end of an era, or two

This week we’ve finally seen the end of some things I’ve been trying to sort out for some time now.

  • The old storage arrays (Sun T3s and A1000s) are finally gone. The T3 arrays in particular have caused us endless grief over the past few years, so I’m more than happy to see them go. It also marks the end of a year long project to centralise our filestore on our resilient cluster. No more losing access to our files when one machine goes down 🙂
  • Our last Solaris 8 machine has been decommissioned. We’d stopped supporting it a while ago, but this finally puts the nail in the coffin. More importantly it means I can focus on moving towards Solaris 10, which I hadn’t done until now because I didn’t want to be running 3 different versions of Solaris at once!
  • We’ve finally removed the last non-rackmountable machine from the racks. Actually, it wasn’t even in our racks, so it means we’re now entirely self-contained within our own area. This is something I’ve been trying to do for many years.

So I’m now spending some time looking at Solaris 10 and trying to see how we can integrate the new “features” into our existing systems. The main problem area seems to be the service management stuff. I’ll undoubtedly post more about that in the future.
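From my initial poking about, the service management framework (SMF) revolves around a couple of commands, roughly like so:

```shell
# List every service and its current state
svcs -a

# Explain why a service isn't running
svcs -x network/ssh

# Enable or restart a service by its FMRI
svcadm enable network/ssh
svcadm restart network/ssh

# Clear a service out of the maintenance state
svcadm clear network/ssh
```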


I don’t have a good history with FreeBSD RAID…

I’ve never got on well with software RAID systems on FreeBSD. I’ve tried gvinum (previously I used vinum), gmirror, and ataraid, all with varying degrees of success. The latest machine I built is using gmirror, and so far I’m happy.
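For the record, setting up gmirror on the new machine was roughly this (device names are specific to my hardware, and I’m glossing over the juggling needed to mirror a disk that’s already in use):

```shell
# Load the mirror class now, and at every boot
kldload geom_mirror
echo 'geom_mirror_load="YES"' >> /boot/loader.conf

# Create a mirror called gm0 across the two disks
gmirror label -v -b round-robin gm0 ad0 ad2

# Check its health; filesystems then live on /dev/mirror/gm0s1a etc.
gmirror status
```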

However, over the past few days I’ve been having problems with a system I built a couple of years ago. It originally used vinum on FreeBSD 5.2.1, but I recently upgraded it to 5.5 and switched to gvinum. A week or so ago I noticed that the second disk in the mirror was marked stale – I guessed it was an artifact of the upgrade to 5.5. So on Tuesday I decided to resync it.
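The resync itself should just be a case of asking gvinum to revive the stale subdisks (subdisk names here are from my config, and I’m going from memory on the exact commands):

```shell
# Show all drives, volumes, plexes, and subdisks with their states
gvinum list

# Revive a stale subdisk, copying its data back from the good plex
gvinum start root.p1.s0
```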

It went fine to start with, until syncing one partition produced a disk read error. This marked the whole original disk as bad, and I’d only half-synced the second disk. Thinking back, I knew this disk had an error on it, and I’d fully intended to replace it. Shame I didn’t do it at the time. Next I rebooted the machine to recover the disk from dead to stale, so I could force it back online. This is where the problems started.

GEOM_VINUM: subdisk swap.p1.s0 state change: down -> stale
GEOM_VINUM: subdisk root.p1.s0 state change: down -> stale
GEOM_VINUM: subdisk var.p1.s0 state change: down -> stale
GEOM_VINUM: subdisk usr.p1.s0 state change: down -> stale

That’s what welcomed me during bootup. Not too bad, I hear you say? Well, that’s all I saw after that – it didn’t boot any further. I tried various things such as unloading the geom_vinum module, booting to single user, booting from the other disk, and pulling one disk, but nothing worked.

In desperation I booted an older kernel. It worked! Well, when I say worked, I mean it booted past this point and asked me for a root partition – but at least I could work with that. It wasn’t immediately obvious why it had worked; my theory is that it wasn’t the fact it was an older kernel, but that it was a different kernel version to the modules on the disk, making it refuse to load the geom_vinum module.

So after getting things running again I decided to update to 6.1. I figured help would be more limited when running 5.5, and I could see changes had gone in to gvinum in 6.1. After a few hours this was done, but the result was the same; I booted to single user, typed “gvinum start”, and got the same message. Oddly this time the machine wasn’t entirely dead – I could still reboot it. But maybe this was because I’d launched it manually.

Regardless of the cause of the problem I’m now stuck. I’ve got everything running off one disk fine, but I can’t get the RAID going. The only possibility I can see is redoing the RAID configuration, but to do this I’ll need to blast the existing config off the disks, and I’m nervous about that.
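If I do redo it, I’d expect wiping the config to look something like this (assuming gvinum implements vinum’s resetconfig command, which is exactly the irreversible bit that makes me nervous):

```shell
# Destroy the vinum configuration on all drives -- irreversible!
gvinum resetconfig

# Then recreate the volumes from a config file and resync the mirrors
gvinum create /etc/gvinum.conf
```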

The other option I’m considering is replacing the machine and starting again (it’s getting old now anyway). Maybe this time I’ll go for a hardware RAID solution, though 🙂