The obvious overhead is RAID-DP parity and hot spare drives, and it's easily calculated: 1 hot spare per 30 drives of each size, and RAID-DP costs 2 parity drives per RAID group. That's 6 drives lost out of the 28 in two shelves, leaving 22 * 266GB drives usable = 5.7TB.
I'd heard that space is also reserved for the OS and bad-block overhead (about 10%), which brings us down to 5.2TB usable.
Well, the web interface shows the aggregate as 4.66TB. So that's 600GB I haven't accounted for. But still, 4.66 TB is a good amount of space.
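The drive-level arithmetic above can be sketched in a few lines of Python. The right-sized capacity, spare count, parity count, and ~10% reserve are the figures quoted above, not values read off a filer, and the reserve figure is approximate, which is why this lands a hair under the 5.2TB I estimated:

```python
# Back-of-the-envelope FAS capacity math for this setup (sizes in GB).
# All constants are taken from the figures quoted in the text above.

RIGHT_SIZED = 266          # usable capacity of a "300GB" drive after right-sizing
TOTAL_DRIVES = 28          # two 14-drive shelves
HOT_SPARES = 2             # ~1 per 30 drives of each size, per head
PARITY = 4                 # RAID-DP: 2 parity drives per RAID group, 2 groups
WAFL_RESERVE = 0.10        # ~10% for OS metadata / bad-block overhead

data_drives = TOTAL_DRIVES - HOT_SPARES - PARITY        # 22 data drives
raw_data = data_drives * RIGHT_SIZED                    # 5852 GB
after_wafl = raw_data * (1 - WAFL_RESERVE)              # ~5267 GB

print(f"data drives:              {data_drives}")
print(f"usable before OS reserve: {raw_data / 1024:.1f} TB")   # 5.7 TB
print(f"after ~10% reserve:       {after_wafl / 1024:.1f} TB")
```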
From the aggregate, we create a flexvol (which by default sets aside 20% as inaccessible snap reserve space). On the flexvol, we create LUNs and present them to our servers. And here's where the space consumption gets nasty:
By default, if you create a 1TB LUN, OnTAP reserves 1TB of disk blocks in the volume. That's nice, and exactly what I'd expect, although in practice we use thin provisioning (lun create -o noreserve) for most of our LUNs.
What I didn't expect going in was that the first time you take a snapshot, OnTAP reserves ANOTHER 1TB for that LUN (the "fractional reserve"). And interestingly enough, that 1TB is never touched until there's no other space in the volume.
OK, that ensures writes can never fail, even if you overwrite the ENTIRE LUN after taking a snapshot. But it reduces the usable LUN-allocation space to 2.33TB. And if you have multiple snapshots, those don't seem to come out of the snap reserve, but rather are in addition to the 2*LUNsize that is already allocated.
So out of a raw disk capacity of 7.2TB (28*266GB; marketed as 28 300GB disks = 8.2TB), we get just over 2TB of space that can be used to hold actual system data.
Now, there are non-default settings that can change that, but they're only available at the CLI, not the web interface:
# snap reserve <vol_name> <percent>
# vol options <vol_name> fractional_reserve <percent>
It is not entirely clear what happens to a LUN when its delta becomes larger than the fractional_reserve. Some documentation says it may take the LUN offline, but I would hope that only would happen if there's no remaining space in the volume (like what happens with snapshot overflow in traditional NAS usages). But it's not clear.
As far as I can tell, the current best practice is to set the snap reserve to the amount of change you expect in the volume, set the fractional_reserve to the amount of change you expect in the LUNs, and enable volume auto-grow and/or snapshot auto-delete so there's still free space when things fill up.
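To illustrate why tuning those two settings matters, here's a hypothetical helper that computes usable LUN space for a given pair of reserve settings. The churn percentages in the example are made-up illustrations, not recommendations:

```python
def usable_lun_space(vol_size_tb: float,
                     snap_reserve_pct: float,
                     fractional_reserve_pct: float) -> float:
    """Space available for LUN data after both reserves are taken out."""
    # First the snap reserve comes off the top of the volume...
    after_snap = vol_size_tb * (1 - snap_reserve_pct / 100)
    # ...then what's left is shared between LUN data and the fractional
    # reserve held against snapshotted LUNs.
    return after_snap / (1 + fractional_reserve_pct / 100)

# Defaults (20% snap reserve, 100% fractional reserve) on a 4.66 TB volume:
print(usable_lun_space(4.66, 20, 100))   # ~1.86 TB
# Tuned for, say, ~15% expected volume churn and ~30% expected LUN churn:
print(usable_lun_space(4.66, 15, 30))    # ~3.05 TB
```

Same spindles, roughly 60% more usable space, just by sizing the reserves to the change rate you actually expect instead of the worst case.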
On the gripping hand, the default options make sure that you have to buy a lot of disks to get the storage you need.