2007/03/20

Ask the right question, and you have the answer

I really hate it when I'm writing an email to ask a highly technical question and, in the process of formulating the question, find the answer.

We're looking for the Next Big Disk Array to replace the Previous Big Disk Array, which has lately been showing its age in the performance arena. This is the Big Disk Array that we use as a Networker adv_file device, where we write the Big Database backups.

There are lots of people who sell BDAs. I can pretty much characterise the product options as:
  • Proprietary (usually web-based) interface that doesn't integrate with any other management tool (that's another rant to be ranted someday)
  • Proprietary ASIC on a controller board (possibly redundant Active/Active, or Active/Passive)
  • Some number of 1, 2, or 4Gb Fibre Channel and/or 1Gb iSCSI ports
  • Cache memory, usually up to 2GB
  • As many disks as will fit in that number of rack units
And it takes a 3-page PDF to marketspeak that. Anyway, from a performance standpoint, the only two numbers ever referenced are the uplink speed (look! we have 4Gb fibre!) and maximum throughput (which is never explicitly defined).

Max throughput, I generally assume, means "read whatever the optimal block size is out of cache, and imagine that the whole array is that fast" (cf. peak transfer rates from consumer disk drives). Unless the unit supports expansion units, in which case it's "get as many expansion units as we can install, stripe a single disk group across all of them, and then report the aggregate throughput from that".

Neither is particularly helpful for me to figure out if we can write "database_backup.tar" onto the array fast enough. But I digress.
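
For a sense of scale, here's the back-of-envelope I keep doing (every number below is my own assumption, not anybody's spec sheet):

  4Gb FC uplink:                      ~400 MB/s peak
  2GB of cache at 400 MB/s:           drained in ~5 seconds
  one 7200rpm disk, random 8K I/O:    ~100 IOPS x 8KB = ~0.8 MB/s
  one 7200rpm disk, streaming:        ~60 MB/s
  fourteen data spindles:             ~11 MB/s to ~840 MB/s aggregate

The marketing number is the first line. The number a multi-hour backup actually sees is the last line, and where it lands in that 75x range depends almost entirely on who reorders the I/O, and how well.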

The question I was trying to ask is:

Where does it make sense to perform I/O reordering, redundancy, and caching:
  • On the array's controller card (which is a custom ASIC with 2GB of cache) -or-
  • In the Solaris I/O stack (including ZFS) on a server with 8GB of RAM, knowledge of the application's I/O pattern, and years of performance-optimization work behind it
In addition, this is not an exclusive-or: the Solaris layer is still going to be optimizing its I/O pattern, possibly with wrong assumptions about the performance and parallelism of the LUN. Or even worse: our PBDA couldn't act as a single big LUN, so the Solaris layer queues 3 I/Os in parallel to what it thinks are 3 different disks, but the controller must actually serialize them with a long seek in between. This is clearly not optimal.
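
If the JBOD option wins, the ZFS half of the answer is about three commands. A minimal sketch, with hypothetical pool, filesystem, and device names:

  # hypothetical device names; raidz2 gives double-parity redundancy in ZFS,
  # and the ARC in the server's 8GB of RAM stands in for the controller's 2GB cache
  zpool create backup raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
  zfs create backup/nsr
  # large records suit big sequential backup writes (128K is the maximum)
  zfs set recordsize=128k backup/nsr

ZFS then sees the real disks and the real parallelism, and can schedule and cache accordingly, which is exactly the knowledge the custom ASIC hides.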

(Which reminds me... the custom ASIC has virtually no ability to actually measure or tune the performance of the system. There is no concept of exposing performance or profiling data, and there's no way to determine whether these seeks are really what's causing the slowness. On the Solaris side, OTOH, there are things like seeksize.d that can help figure out why the fscking thing is so slow.)
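
For the curious, the guts of that are just DTrace's io provider. A rough sketch of the seeksize.d idea (not the real script) that quantizes the block distance between consecutive requests on each device:

  dtrace -n '
  io:::start
  /last[args[0]->b_edev]/
  {
      /* seek distance, in 512-byte blocks, from where the last I/O ended */
      this->delta = args[0]->b_blkno >= last[args[0]->b_edev]
          ? args[0]->b_blkno - last[args[0]->b_edev]
          : last[args[0]->b_edev] - args[0]->b_blkno;
      @seek[args[1]->dev_statname] = quantize(this->delta);
  }
  io:::start
  {
      /* remember where this I/O ends, per device */
      last[args[0]->b_edev] = args[0]->b_blkno + args[0]->b_bcount / 512;
  }'

A pile of large seek distances under a supposedly sequential backup load is exactly the smoking gun the array's GUI can't show you.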

Just framing the question has taken me from 60/40 in favor of JBOD to about 95% in favor of it.
