PostgreSQL Upgrade Part 2

Well, the PostgreSQL upgrade was a snap, sorta. I needed to do a full dump and restore as this was a major version change – no surprises there. What pissed me off, though, is that when using pg_dump's custom binary dump format (“-F c”), the resulting backup file is of no use for remote workers who aren’t at the actual console.
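For reference, the custom format is what `-F c` selects; a quick sketch of both dump styles (the database name and output paths here are placeholders, not the ones actually used):

```shell
# Custom (binary) format dump -- restorable only via pg_restore.
# "mydb" and /backup paths are hypothetical placeholders.
pg_dump -F c -f /backup/mydb.dump mydb

# Plain-text SQL dump for comparison -- replayable with psql.
pg_dump -f /backup/mydb.sql mydb
```

The custom format is compressed by default and supports selective, reordered restores; the plain format is just a SQL script.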

Let me expand on this:

This type of backup file is advertised as “more convenient” and offers more options at restore time: selective data restores, data re-ordering, index tricks and the like. However, no matter WHAT I did, pg_restore echoed its progress and a copy of all the data being restored to standard output too!! This means that, basically, I was going to have the same full text of 20GB worth of database data shoved down my SSH session!
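For what it’s worth, pg_restore writes the reconstructed SQL script – data and all – to standard output unless you tell it to connect to a database directly with `-d`. A hedged sketch (database name and paths are placeholders):

```shell
# Restore straight into the target database; nothing but status
# messages reaches the terminal. "mydb" and the path are placeholders.
pg_restore -d mydb /backup/mydb.dump

# Without -d, pg_restore emits the full SQL script (all the data)
# to stdout -- exactly the firehose described above.
pg_restore /backup/mydb.dump > /tmp/mydb.sql
```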

Yes – this makes the whole affair much slower!

Luckily, being the lateral thinking kind of dude that I am, I never put all my eggs in one basket. That is just a recipe for data omelette.

I also had some plain-text dumps made with pg_dump. So I went into a PostgreSQL template1 session and used the “\i” command to import my big .sql file. Perfect! Only important updates sent to standard output, and not the whole damn enchilada.
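That workflow looks roughly like this – presumably after creating and connecting to the target database, since a plain pg_dump script doesn’t create the database itself (names and paths here are placeholders):

```shell
# Interactive route: connect to template1, then at the psql prompt:
#   CREATE DATABASE mydb;
#   \c mydb
#   \i /backup/mydb.sql
psql template1

# Non-interactive equivalent once the database exists:
psql -d mydb -f /backup/mydb.sql
```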

The disk restore is still going on. At a max throughput of 20MByte a second on a 1000BaseT network, that is at best 1.2GB a minute, 72GB an hour, so for 250GB approximately 3.5 hours. However, I was also still doing a RAID array background initialisation, so even after setting the rebuild rate to 10%, the system still needed a lot of time to move the files back. I didn’t bother to do the maths, because whether it was 3.5 hours or 10, it would breach my maintenance window.
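The back-of-envelope maths above can be checked in one awk one-liner:

```shell
# 20 MB/s sustained transfer, 250 GB to move (decimal units).
awk 'BEGIN {
  mb_per_sec  = 20
  total_gb    = 250
  gb_per_hour = mb_per_sec * 3600 / 1000
  printf "%.1f GB/hr, %.1f hours\n", gb_per_hour, total_gb / gb_per_hour
}'
# prints: 72.0 GB/hr, 3.5 hours
```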

Because my advertised downtime window was rapidly approaching, I had no choice but to let the copy proceed, redirect the main cluster to access the data store via NFS over the network, and bring services back online. I will then be able to do an rsync later, in less than an hour, to bring the haphazardly copied set on the main cluster in line with the now used and modified data store on my hot standby server.
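That catch-up sync might look something like this – hostnames and paths are hypothetical, and you would want PostgreSQL stopped on both ends before rsyncing a live data directory:

```shell
# Pull the now-live data store from the hot standby back onto the
# main cluster. -a preserves ownership, permissions and timestamps;
# --delete removes files that no longer exist on the source.
# "standby" and the paths are placeholders.
rsync -a --delete standby:/var/lib/pgsql/data/ /var/lib/pgsql/data/
```

Because rsync only transfers the differences, the second pass over an already-copied tree is far quicker than the original bulk copy.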