Tiger Server Upgrade

Now that I’ve managed to upgrade the server hosting the MARS system from 10.3.9 to 10.4, I thought I’d share some notes on the experience…not because they’ll interest very many readers, but because I may want to refer back to them this time next year (when memory of how it was done has faded). There may also be one or two intrepid souls trying to google “DSpace” and “MacOS”, and this could prove useful…

The upgrade in place failed miserably: two-thirds of the way in, the process bombed, leaving me with an unbootable drive. The drive still contained the production DSpace installation, the production Postgres database, and the Mac OS-specific version of PostgreSQL that was installed when the machine was new, but the operating system was a weird hybrid of 10.4 and 10.3.9, stuck in an endless loop of rebooting and then asking for disk #2 again.

I took that drive out of slot 1 on the XServe and moved it to slot 2. The disk that had been in slot 2 held a bootable (blessed) image of the boot drive (I had “cloned” it from the production disk about three weeks earlier using CarbonCopyCloner), so I booted from that. The now semi-upgraded disk mounted as an “extra” drive in slot 2, which meant the data was intact, since it wasn’t stored in any directory that Mac OS X planned to upgrade.

I tarred up the /usr/local/pgsql/data directory from the drive that failed updating and extracted it into the /usr/local/pgsql/data directory on the (cloned) boot drive. I restarted Postgres and the original data was up and running again. I then did a pg_dump of that data and made a couple of copies, both with the -C (create database) switch set, which lets you recreate the database with a plain psql < xxxx.dmp command.

Next I shut the server down, removed the two drives, and inserted a blank second drive from the other XServe in the office. I did a fresh install of Mac OS X Tiger Server on that drive, giving the machine the original server’s name (U2), IP address, admin user name/password combination, and so on. I created the same users with the same user ID numbers (important for moving files from one machine to another and preserving proper permissions).

I installed the Xcode (developer) tools on the newly installed server disk, then pulled down the latest 7.x source version of PostgreSQL and compiled it:

./configure --enable-multibyte --enable-unicode --with-java
make
make install

Voila! I shut the server down and reinstalled the original (failed upgrade) disk into slot 2, then rebooted off the newly installed Tiger OS on disk 1. I moved the tarball of the original Postgres data (/usr/local/pgsql/data/*) over from the second drive to the newly created /usr/local/pgsql/data directory on the new disk. Then I moved the DSpace source directory tree from the old machine to the new one and did a quick:

ant -Dconfig=/dspace/cfg update

which rebuilt the *.war files. I moved them to /Library/JBoss/3.2/deploy and started the app server from the Server Admin program… Production DSpace is now running under a clean install of 10.4 Server.
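The Postgres-moving part of the dance above can be sketched like this. It is only a sketch: the two roots here are throwaway temp directories standing in for the failed-upgrade volume and the fresh boot disk, so it is safe to run anywhere; on the real machines the paths were /Volumes/<old disk>/usr/local/pgsql/data and /usr/local/pgsql/data, and the pg_dump step needs a running Postgres.

```shell
#!/bin/sh
set -e

# Stand-ins: OLD_ROOT plays the failed-upgrade disk (mounted in slot 2),
# NEW_ROOT plays the freshly installed boot disk. Swap in real mount points
# for real use.
OLD_ROOT=$(mktemp -d)
NEW_ROOT=$(mktemp -d)

# Pretend there is a Postgres data directory on the old disk.
mkdir -p "$OLD_ROOT/usr/local/pgsql/data"
echo '7.4' > "$OLD_ROOT/usr/local/pgsql/data/PG_VERSION"

# Tar up the data directory from the old disk...
tar -czf /tmp/pgsql-data.tar.gz -C "$OLD_ROOT/usr/local/pgsql" data

# ...and extract it into /usr/local/pgsql on the new one.
mkdir -p "$NEW_ROOT/usr/local/pgsql"
tar -xzf /tmp/pgsql-data.tar.gz -C "$NEW_ROOT/usr/local/pgsql"

# Once Postgres is restarted on top of that data directory, a logical dump
# is cheap insurance; -C embeds the CREATE DATABASE statement so the dump
# can be replayed with a plain "psql < dspace.dmp":
#   pg_dump -C dspace > dspace.dmp

ls "$NEW_ROOT/usr/local/pgsql/data"
```

Copying the raw data directory like this only works between the same PostgreSQL major version on the same architecture, which is why the pg_dump copies are worth having as well.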
Luckily the assetstore is kept on the XServe RAID, which is hardware RAID and shows up ready for action on any machine I plug the fibre-channel cables into, so the actual "bitstreams" of the DSpace installation were never in jeopardy. Of course, without a functioning Postgres database linking the metadata to the indecipherable directory structure and filenames given to the bitstreams, it would be pretty useless. A nice project would be a SQL command that runs through the Postgres database and builds a list of objects and what each object's bitstream is called in the assetstore. That list could be printed or saved outside the Postgres database, so if you absolutely had to find a bitstream after a catastrophic failure of the metadata database, you could. Will work on that when I next remember it...
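A first stab at that object-to-bitstream list might look like the query below. The table and column names (handle, item2bundle, bundle2bitstream, bitstream.internal_id, and resource_type_id = 2 meaning "item") are my recollection of the DSpace 1.x schema, not gospel, so check them against \d output in psql before trusting the result; the database name "dspace" is also an assumption.

```shell
#!/bin/sh
set -e

# Write the mapping query to a file so it can be rerun on a schedule.
cat > /tmp/bitstream-map.sql <<'SQL'
-- One row per (item, bitstream): the handle, the original filename, and
-- the internal_id that names the file inside the assetstore. In DSpace 1.x
-- the on-disk path is derived from internal_id's leading digit pairs
-- (e.g. 12/34/56/123456...), so internal_id is enough to find the file.
SELECT h.handle,
       b.name        AS original_filename,
       b.internal_id AS assetstore_id
  FROM handle h
  JOIN item2bundle      i2b ON i2b.item_id      = h.resource_id
  JOIN bundle2bitstream b2b ON b2b.bundle_id    = i2b.bundle_id
  JOIN bitstream        b   ON b.bitstream_id   = b2b.bitstream_id
 WHERE h.resource_type_id = 2   -- 2 = item, if memory serves
 ORDER BY h.handle;
SQL

# Run it against the dspace database and stash the output somewhere OUTSIDE
# Postgres (that being the whole point of the exercise), e.g.:
#   psql -d dspace -f /tmp/bitstream-map.sql > /Volumes/Backup/bitstream-map.txt
cat /tmp/bitstream-map.sql
```

Printed out or dumped to the RAID alongside the assetstore, a list like this would let you match a bitstream to its metadata by hand after a total database loss.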