I hate trying to transition work to other people. I'm at the hospital right now helping SWMBO have our 2nd baby, so I'll be off for a while.
So I'm leaving several projects unfinished... the SAP upgrade sandbox systems, the BEA monitoring project, the Oracle installation & monitoring project, the whole EMC upgrade, the cluster implementation, as well as supporting the treasury project, the Hyperion upgrade, the WebFOCUS upgrade... not to mention the usual stuff. Much of it is on the critical path for our big SAP upgrade (4.5 to 6.0) in February.
And I guess I'm just not comfortable that I can successfully hand these projects over to the rest of my team.
Previously, I interpreted this as a lack of communication on my part -- I haven't taken the time over the past two months (it's not like this wasn't a planned leave) to make sure that the rest of the team has the knowledge to keep these projects moving. Now, I'm not so sure that I could have done anything differently.
The members of the team who are skilled enough to take up any of these projects are vastly overcommitted (not all of these projects are just mine -- I just advise and consult on some of them), and I don't think I can help the remaining team learn what they would need to learn to make meaningful contributions. For example, they're Windows administrators and this is a Solaris problem... it doesn't help anybody if I basically use them as a speech-to-command-line interpreter.
Trouble is, I'm the most technically skilled Unix guy on the team, so I end up on the critical path of so many projects. But am I realistically supposed to be able to transfer knowledge about ongoing problems when I'm also new to them?
Oh well, this post took a long time to come out, and lots of stuff has happened since then. The question still remains, though: How am I supposed to get everything done, including training a backup, when the whole team (me and all potential backups) is overcommitted?
--Joe
2006/11/21
2006/11/16
Fun with Filesystems
I think there's a race condition in Solaris... we had a filesystem get full with Oracle archivelogs, so I removed them, then checked to see what effect that had:
# rm D*_60[012345]?_*.dbf
# df -h .
Filesystem            size    used  avail  capacity           Mounted on
/oracle/D01/saparch   5.9G  16384E   6.4G  301049486643838%   /oracle/D01/saparch
A moment later, it was happy:
# df -h .
Filesystem            size    used  avail  capacity           Mounted on
/oracle/D01/saparch   5.9G    257M   5.6G  5%                 /oracle/D01/saparch
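If it happens again, here's a one-liner to run in a second window while the files are being removed (a minimal sketch; it just assumes the same mount point) to catch the transient bogus values -- they look like free-space counters momentarily going negative:
# while true; do df -h /oracle/D01/saparch | tail -1; sleep 1; done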
This is not the first time I've noticed some weirdness with removing data on S10. Last time, I wiped out a copy of our big Oracle database (rm -rf sapdata*/*), which only took a few seconds, but unmounting the filesystem afterward took over 8 hours.
--Joe
2006/10/17
Write something
It's been over a month since I last posted. It's not like I haven't been dealing with lots of enterprise-SA type material, just that I've been too busy to even breathe, much less distill my thoughts into something for this site. But since I'm sick right now, I sorta have a little bit of time on my hands...
Some of the recent topics that are worth discussing (probably in their own posts, or several posts)...
- Thoughts from the monitoring meeting (discussions about what we need for enterprise monitoring, though not all of it related to monitoring): the false buy-vs.-build dichotomy; the fundamental architectural difference between BB-style status reporting and SNMP traps (no explicit "OK" status); the industry's bundling of monitoring tools with management tools; the myth of agentless monitoring; SNMP support on Windows (SNMP Informant)
- An Infrastructures.org mailing list post Message-ID: <20060818174228.B26037@so.lanier.com>
- The usefulness of professional services and consultancy in enterprise application deployment: experiences with CA, EMC, and Hyperion
- Why the hell can't I keep my desk clean?
- I miss going to conferences: VMworld is on now, LISA is in December. I'm expecting a new baby about halfway between, and there's no way I can go out of town for a week.
- I hate being sick. Daytime TV sucks even with satellite and a DVR. If I'd known I was going to be sick this long, I would have joined NetFlix.
--Joe
2006/08/25
The Ultimate P2V
There's been a lot of talk about the "Blue Pill" trick, where a hypothetical virus would use the new x86 virtualization features (VT or Pacifica) to move a running OS under a hypervisor, where the virus would run undetectably. It would be very interesting to extend this into a positive technology...
Imagine a program that uses Blue Pill to move the OS under a hypervisor. That's fine, but the OS is still coupled to the physical devices (network cards, disks, etc.). Now have the hypervisor generate a virtual (hotplug) PCI bus and attach it to the running OS, then hotplug a vmnic and an emulated SCSI controller. The OS notices the new redundant paths to the disks (via standard multipathing software) and fails all the network connections over onto the virtual card. Then the hypervisor virtually unplugs the real PCI bus, and we're left with a completely virtualized (i.e., VMotion-able) machine, without any downtime.
That would be really cool.
This would require:
- A bluepill-compatible hypervisor that can create virtual hotplug PCI buses, and that can transport running VMs across physical machines
- An OS that supports PCI hotplug, dynamic disk multipathing, and transparent network failover
- All the disks on the physical system being on a SAN or otherwise multihosted
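To make the device-handoff sequence concrete, here's a rough sketch of what the guest-OS side might look like as a little script on a reasonably modern Linux guest. This is purely illustrative -- the device names (eth1, 0000:00:19.0) are made up, it assumes dm-multipath and an active-backup bond are already configured, and none of it comes from an actual Blue Pill implementation:
# Hypothetical OS-side view of the handoff (Linux; made-up device names)
echo 1 > /sys/bus/pci/rescan                   # notice the hotplugged virtual PCI bus
multipath -ll                                  # confirm redundant paths to each LUN
                                               # (real HBA + emulated SCSI controller)
echo +eth1 > /sys/class/net/bond0/bonding/slaves         # enslave the new vmnic
echo eth1 > /sys/class/net/bond0/bonding/active_slave    # shift traffic onto it
echo 1 > /sys/bus/pci/devices/0000:00:19.0/remove        # detach the physical NIC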
2006/07/18
DamnDamnDamnDamn
The hard drive in my work laptop is in the process of dying. That is to say, it has died (bluescreen: kernel inpage error), but it has occasionally spun up enough to boot Windows.
Just long enough for the backup software to load and start a backup, not long enough for the backup to finish.
On the bright side, Support has sent me a new drive, and it's an 80GB: a 20GB upgrade from what I had. So I should have enough space now for some of the virtual machines I've been meaning to create.
Unfortunately, I still haven't finished installing my software on the new image (so far going on 4 hours of work). The only reason I have email is because OWA actually works through Firefox on Linux. Whoda thunk?
--Joe