What I've learned today

A couple of useful Java tricks that I can't find consolidated elsewhere on the web:

Locking down Java

I have a Java instance that runs just my one application (for example, /opt/foo/jre/bin/java). This application talks across systems and can be set to use SSL. I don't want to pay for a "real" certificate; I'll use our internal CA. Besides, I don't really want to trust Verisign or any of the other 300+ CAs that Oracle/IBM have decided they like. (No offense intended, just paranoia about MITM attacks within my LAN.)

Let's assume that I have this part working, and that SSL is being served happily on the first server.
I'll start by extracting the certificate chain from that first server:
:| openssl s_client -host firstinstance -port 8443 -prexit \
           -showcerts >firstinstance.crt
(No, that's not an emotionless command emoticon, it's a pipe)
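The -showcerts capture includes handshake chatter around the PEM blocks. keytool copes with that, but if you ever want just the certificates, a sed range expression pulls them out. A sketch, using a canned stand-in for the real firstinstance.crt capture:

```shell
# Pull only the PEM certificate blocks out of an s_client capture.
# /tmp/firstinstance.crt is a canned stand-in for the real capture file.
cat > /tmp/firstinstance.crt <<'EOF'
depth=0 CN = firstinstance
verify return:1
-----BEGIN CERTIFICATE-----
MIIBfakebase64payloadfordemonstrationonly=
-----END CERTIFICATE-----
---
New, TLSv1/SSLv3, Cipher is AES256-SHA
EOF
sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' /tmp/firstinstance.crt
```

Only the lines between (and including) the BEGIN/END markers survive; the handshake chatter is dropped.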

From there, I can manipulate the Java keystore to trust (or not trust) my targets.
keytool -import -alias firstinstance -file firstinstance.crt \
  -keystore paranoid.jks -storepass changeme -noprompt \
  && cp paranoid.jks /opt/foo/jre/lib/security/cacerts
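That cp clobbers the stock cacerts, which is deliberate here, but keeping the shipped copy around makes rollback trivial. A sketch of the same move with a backup step, using /tmp stand-ins for the real paths:

```shell
# Keep the shipped cacerts around before clobbering it.
# /tmp/jre/... and /tmp/paranoid.jks are stand-ins for the real paths.
mkdir -p /tmp/jre/lib/security
echo "shipped keystore"  > /tmp/jre/lib/security/cacerts   # pretend stock file
echo "paranoid keystore" > /tmp/paranoid.jks               # pretend new keystore
cp /tmp/jre/lib/security/cacerts /tmp/jre/lib/security/cacerts.orig
cp /tmp/paranoid.jks /tmp/jre/lib/security/cacerts
```

Rolling back is then just copying cacerts.orig back into place.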

Did it work?

Well, let's check.  Using the same Java environment as our app:
$ export JRE_HOME=/opt/foo/jre
$ export JAVA_HOME=/opt/foo/jre
$ /opt/foo/jre/bin/java -cp . SSLPoke firstinstance 8443
Successfully connected
In theory, a restart of the app should have it pick up the new (single-entry) list of trusted certificates. In practice...

WTF file is it loading from?

Hidden somewhere in the depths of the startup script for the app, something gets redefined so that instead of looking in $JAVA_HOME/lib/security/cacerts (which has been nicely cleaned up), it uses /opt/foo/lib/security/cacerts. (But of course, I didn't know that until I reached for a bigger hammer.)
# apt-get install -y sysdig
# sysdig proc.name=java and evt.type=open | grep cacerts & systemctl restart foo
#7554308 14:25:48.107476643 3 java (6613) < open fd=22(/opt/foo/lib/security/cacerts) name=/opt/foo/lib/security/cacerts flags=1(O_RDONLY) mode=0 
Ah, there's the file I need to mangle for my application.
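When sysdig isn't handy, /proc can answer a weaker version of the same question: which files a process is holding open right now. (Weaker because it won't catch a file Java opened and already closed at startup, which sysdig's open() trace does.) A self-contained demo, using the current shell as the stand-in process:

```shell
# Poor man's version of the sysdig trick: list a live process's open files
# via /proc. Here the "process" is this shell, holding a stand-in cacerts open.
echo dummy > /tmp/demo-cacerts
exec 9< /tmp/demo-cacerts            # hold the file open on fd 9
fds=$(ls -l /proc/$$/fd)             # fd symlinks name the real paths
exec 9<&-                            # release it
echo "$fds" | grep demo-cacerts
```

For the real case, while the JVM still has the file open, something like ls -l /proc/$(pidof java)/fd | grep cacerts does the job.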

You may have noticed I slipped in the SSLPoke command above. That's a fetch of https://confluence.atlassian.com/download/attachments/117455/SSLPoke.java -- Thanks, #Atlassian!


systemd unit file names

TL;DR: custom systemd unit files cannot have "-" in their name -- or at least mine can't.

I like to have third-party software installed in a version-specific directory. In particular, I'm trying to get Elasticsearch to live in /opt/elasticsearch-2.4.0/ with a symlink /opt/elasticsearch -> /opt/elasticsearch-2.4.0. That's especially useful for service software of which I might need to run multiple versions. I'm using Ubuntu 16.04 (Xenial/LTS), so I'm forced to deal with systemd unit files that are not delivered OOTB. For comparison, I can install the .deb that elastic.co distributes, but that installs itself in /usr/, which means only one version.
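The versioned-directory-plus-symlink scheme can be sketched with stand-in paths (under /tmp rather than /opt):

```shell
# Versioned install directories with a stable symlink; /tmp stands in for /opt.
mkdir -p /tmp/opt/elasticsearch-2.4.0/bin
ln -sfn /tmp/opt/elasticsearch-2.4.0 /tmp/opt/elasticsearch
# A later upgrade installs side-by-side and just repoints the symlink:
mkdir -p /tmp/opt/elasticsearch-2.4.1/bin
ln -sfn /tmp/opt/elasticsearch-2.4.1 /tmp/opt/elasticsearch
readlink /tmp/opt/elasticsearch
```

The -n flag matters: without it, the second ln would follow the existing symlink and create the new link inside the old versioned directory instead of replacing the link itself.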

Fortunately, the unit file is very simple:

Description=Elasticsearch 2.4.0
I save this as /usr/lib/systemd/system/elasticsearch-2.4.0.service, and... nothing happens. Install the .deb, check that it has the right unit files (starts the service OK). Copy /usr/lib/systemd/system/elasticsearch.service to elasticsearch-2.4.0.service -- this should use the exact same config options, just run under a different service name -- and nothing. Purge the .deb and put my unit file (above) in elasticsearch.service, and it runs.
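For reference, a minimal unit file for this layout might look like the following. The Description line is from above; everything else (the User, the ExecStart path, the install target) is my assumption based on the /opt layout described earlier, so treat it as a sketch rather than the exact file:

```ini
[Unit]
Description=Elasticsearch 2.4.0

[Service]
# User and ExecStart are assumptions based on the /opt layout above.
User=elasticsearch
ExecStart=/opt/elasticsearch/bin/elasticsearch

[Install]
WantedBy=multi-user.target
```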

The closest I can come to documentation of why servicename-dash-something misbehaves is in the systemd.unit man page:
Some unit names reflect paths existing in the file system namespace.  Example: a device unit dev-sda.device [...]
So there might be something special about unit names with a dash in them? Hmm: mv elasticsearch.service elasticsearch_2.4.0.service ; systemctl daemon-reload ; service elasticsearch_2.4.0 start. By golly, that works.

I guess I'll change the service naming.  (But note, cfengine3-web is apparently a perfectly valid systemd unit name for CFEngine Enterprise)


Small Business Server crash and recovery

I'm the "tech guy" for my parents' small business, and the new point of sale software for them runs on top of MSSQL on Windows Server.

Step 1: Buy a system with Windows Server 2012 on it.  (Check.  $900 from Microcenter)
Step 2: Take it home and set it up so that the POS people can install their software on it.
Step 3: ???
Step 4: Profit.

Actually, I'm stuck in the middle of step 2.  I have to take this opportunity to make it more enterprisey, even though I have very little experience with modern Windows Server technologies... But how hard can it be?  I just want a server that can sit in their basement, run MSSQL and the POS app, plus do other little tasks, like print queueing and that sort of thing... And allow employees and owners to connect in remotely so that they can travel... And maybe help protect the terminals' web browsers from malware... and I need to be able to support it remotely.  And....

So I figure I have to do all this stuff anyway, I'll just go ahead and set it all up: Active Directory, VPN, Internet gateway/proxy, etc.  MSSQL will be the POS guy's problem :)

On the bright side, the server I bought has a Supermicro motherboard that supports KVM-over-IP via IPMI. Which means I can sit with my laptop on the couch, while the server sits in the basement without a keyboard or monitor. Unfortunately, "in the basement" at the moment means in the middle of a construction zone. The only flat place to put it (that was close enough to the shelf that has my Internet router) was on top of the washing machine. Which apparently vibrates enough when it's running to bounce everything off of it. So the server fell the 5 feet to our new concrete floor. Crash.

Both hard drives are misbehaving, and there are broken plastic bits off the front of the case.

Luckily, I have a spare 1TB hard drive in stock.


Ok, iostat, what's the mountpoint

Fixing^WDiagnosing^WLooking at a performance problem, we're pretty sure it is a disk issue. iostat is helpful (for showing what's generally going on), but I have to keep remembering that /dev/sdk is mounted as oradata3 (etc.). So instead, here's a script that remembers it for me:
iostat -xk 5 | sed -e "$(mount | awk '/ext3/{print $1"/"$3}' | awk -F/ '/dev/{printf "s/%-10s/%-10s/\n",$3,$NF}')"
Look at the mounted filesystems. If they're EXT3, grab the first and third entry (dev and mountpoint) and write out "s/sdk /oradata3 /" (spacing is important to keep the iostat columns aligned). Pass that to sed to transform the iostat output. --Joe
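You can watch the sed program that pipeline generates by feeding the awk stages some canned mount output (the devices and mountpoints below are made up for the demo):

```shell
# Show the sed program the pipeline builds, using canned 'mount' output.
# sdk1/oradata3 and sdb1/oralogs are made-up example names.
mount_output='/dev/sdk1 on /oradata3 type ext3 (rw)
/dev/sdb1 on /oralogs type ext3 (rw)
proc on /proc type proc (rw)'
echo "$mount_output" \
  | awk '/ext3/{print $1"/"$3}' \
  | awk -F/ '/dev/{printf "s/%-10s/%-10s/\n",$3,$NF}'
```

Each ext3 mount becomes one padded substitution like `s/sdk1      /oradata3  /`; non-ext3 lines (the proc mount here) are filtered out before they reach sed.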


Greenplum DCA and my roll-my-own ETL host

I'm trying to get my new Dell server with its 10gigE network cards to talk to the back-end switch of my greenplum DCA.
Other than the fact that Brocade doesn't seem to understand the difference between a support matrix that says "using non-Brocade cables is not supported" and a software feature that checks whether the inserted standards-compliant cable was manufactured by Brocade (vs. a standards-compliant cable made/sold by Dell) and turns off the port if not -- and other than the Dell sales tool not pointing out this incompatibility -- I'm in good shape.
Once there's a link at the SFP+ layer, however, the greenplum switches are not set up for ETL work out of the box... And of course, since these back-end switches are not connected to the "real" network, I have to ssh-tunnel to get to the Switch Admin web tools.
The unused ports on the switches are set up as link aggregation members, and so do not work without even more of these cables. So first, I have to take them out of the CEE LAG groups (first disable the port via Port administration). Switch Administration -> CEE -> Link Aggregation, Edit LAG Group 2, and take out Te 0/18.
Then back over to Port Admin, to change the port to L2 Access mode, and we can enable it.
And finally, back over to Switch Administration -> CEE -> VLAN, edit VLAN 199, and add the Te 0/18 interface to the vlan.
And we have packets moving.
Testing with "gpssh -f hostfile ping -c 3 etl1-1" and "gpssh -f hostfile ping -c 3 etl1-2"