system:vicki_debian_lenny_to_squeeze [2012-03-13T03:56:54+0000]
michael_paoli outlined fair bit of basic upgrade stuff for sflug (mostly referring to other documentation, etc.)
system:vicki_debian_lenny_to_squeeze [2013-02-17T20:00:48+0000] (current)
michael_paoli typo/wordo fix
Line 543: Line 543:
 </file>
 ===== vicki host install/"upgrade" procedure outline + select details: =====
 <file>
Line 645: Line 644:
      o /boot
      o vicki-boot
-    o Finish partitioning nad write changes to disk
 +    o Finish partitioning and write changes to disk
     (continue without swap)
 o Install base system
Line 846: Line 845:
 ===== sflug VM upgrade to Debian GNU/Linux 6.0.4 =====
   * sflug VM log bits made available here: [[http://log.txt]]
-  * upgrade to be done [[http://pipermail/sf-lug/2012q1/009242.html|soon]]
 +  * upgrade to be done [[http://pipermail/sf-lug/2012q1/009242.html|soon]] ... [[http://pipermail/sf-lug/2012q1/009246.html|"done"]]
-  * will be upgrading, mostly per:
 +  * will be upgrading/was upgraded, mostly per:
-    * incorporating "lessons learned" / glitches/gottchas found and discovered from earlier similar upgrades (namely balug qemu-kvm VM guest on same host):
 +    * **backup(s)** - since [www.] is (generally) "down" when sflug is down, we'll mostly try to minimize the host's downtime.  So, for backup, backing up "everything" is good, but as this is a VM, and its storage on the hosting system is under LVM, we have some nice options for fast live backups - backups from which state is at least "recoverable", namely:
-      * from earlier similar upgrade done on balug qemu-kvm VM guest on same host, encountered problem with reboot - namely upgrade procedure produced a flawed /boot/grub/menu.lst file - so we'll carefully inspect (and adjust, as necessary/appropriate) that file prior to performing that reboot step.  From earlier with balug qemu-kvm VM guest:
 +      * from host (vicki), sflug's storage is an LVM LV,
 +      * we can take an LV snapshot of that LV, create a backup copy of the snapshot, then release the snapshot,
 +      * though that doesn't give us fully consistent filesystem images (as the guest filesystem(s) are still live, mounted rw, and active at that time),
 +      * it does give us a fully recoverable point-in-time capture of the guest's disk image (as if one instantly removed power from the guest and then took a snapshot of its hard drive data),
 +      * note that otherwise taking a copy of a live filesystem (e.g. via dd) is NOT guaranteed to be recoverable, as the data continues to change while the filesystem is being read,
 +      * note also other methods could be used to ensure a recoverable copy, e.g. by creating a mirror, "breaking" (separating off) that mirror copy, and then using that (as that, again, provides a point-in-time snapshot of the entire filesystem or whatever data is mirrored),
 +      * note also we need to pay due attention/care regarding UUIDs and such to avoid problems, as this LVM snapshot method will introduce some duplication of UUIDs; in our scenario, however, that is //mostly// a non-issue, as:
 +        * from the host there are separate and unique LVs, and for the most part the host won't go poking inside them to look at, e.g., partitions or other constructs within, so the host normally won't see any of those lower-level UUIDs,
 +        * we won't present both "original" and "copy" to the guest (sflug) or any other single VM at the same time - if we did, that's where problems would be likely to occur, as the guest would see two hard drives with identical UUIDs,
 +        * also, these backups are "temporary" and for convenience - "just in case" the upgrade goes horribly wrong or something like that - they give us a quick, convenient way to get back to "precisely" where we were before we started the upgrade (specifically to the moment the LV snapshot was taken); we probably won't need to use this backup at all, and will dispose of it (and its UUIDs - wipe the data) fairly shortly after we're confident all has gone quite well with the upgrade and we've no interest in potentially reverting to the pre-upgrade snapshot backup,
 +        * note also this is separate/independent of our more (semi-?)regular backups, which tend to be filesystem based and avoid most all the UUID issues (and also tend to be much more space efficient) - in this particular pre-upgrade backup scenario, we're going for recoverability, speed, and uptime over space efficiency of the stored (or transmitted) backup.
 +      * doing the backup - here's part of what doing the "proposed" LV snapshot based backup looks like; in this case I do a "real" pre-run (plus some additional checks) - when we're much closer to our upgrade time, we will do another nice fresh backup (except some of that pre-work will already be done, so we can skip repeating some bits - we can also skip some of the test/verification bits done here for demonstration purposes)
 +//I show my comments on lines starting with //
 +//I don't show all output of all commands (mostly reduced for brevity/clarity)
 +//from sflug (guest), we have relatively simplistic filesystem and swap setup,
 +//one "physical" (virtual from host, but effectively physical to guest) hard drive
 +//with partition for root (/) filesystem, and swap
 +# sfdisk -uS -l
 +Disk /dev/sda: 1435 cylinders, 255 heads, 63 sectors/track
 +Units = sectors of 512 bytes, counting from 0
 +   Device Boot    Start       End   #sectors  Id  System
 +/dev/sda1            63  20980889   20980827  83  Linux
 +/dev/sda2      20980890  22041179    1060290  82  Linux swap / Solaris
 +/dev/sda3             0         -          0   0  Empty
 +/dev/sda4             0         -          0   0  Empty
 +//and to confirm a bit - ignoring virtual filesystems, but our other filesystem(s) and swap:
 +# mount -t notmpfs,noproc,nodevpts,nosysfs,nousbfs
 +/dev/sda1 on / type ext3 (rw,errors=remount-ro)
 +# awk '{if($2=="/")print;};' /etc/fstab
 +UUID=792a4837-91bc-4b36-bce7-a5a4208845bf / ext3 errors=remount-ro 0 1 #/dev/sda1
 +# blkid /dev/sda1
 +/dev/sda1: UUID="792a4837-91bc-4b36-bce7-a5a4208845bf" SEC_TYPE="ext2" TYPE="ext3"
 +# cat /proc/swaps
 +Filename                                Type            Size    Used    Priority
 +/dev/sda2                               partition       530136  0       -1
 +# awk '{if($3=="swap")print;};' /etc/fstab
 +UUID=e326206f-55a6-4fbb-8c81-62f8dfb30753 none swap sw 0 0 #/dev/sda2 LABEL=swap
 +# blkid /dev/sda2
 +/dev/sda2: TYPE="swap" LABEL="swap" UUID="e326206f-55a6-4fbb-8c81-62f8dfb30753"
 +//so, quite simple there and no surprises.  And, what does that look like from the host?
 +# hostname
 +# virsh list --all
 + Id Name                 State
 +  2 sflug                running
 + 26 balug                running
 +# virsh dumpxml sflug | fgrep '<source dev='
 +      <source dev='/dev/vg-sflug/sda'/>
 +//so, the storage for the sflug VM is on /dev/vg-sflug/sda in VG /dev/vg-sflug,
 +//how much space used and free (and PE size) for that VG?
 +# vgdisplay /dev/vg-sflug
 +  PE Size               4.00 MiB
 +  Total PE              6904
 +  Alloc PE / Size       2941 / 11.49 GiB
 +  Free  PE / Size       3963 / 15.48 GiB
 +//so, we can see that a fair bit less than half of the VG is used,
 +//thus we know we've got ample room to complete our backup,
 +//and also for our snapshot LV to be present;
 +//snapshot LV doesn't have to consume same space as LV it's a snapshot of,
 +//as it essentially only tracks changed data,
 +//and since we'll do this backup relatively quickly,
 +//and the sflug host won't be changing lots of data nor quickly at that time anyway,
 +//we don't need much space for the snapshot.
 +//So, let's do a good test run of our procedure:
 +//we peek at LV names in the VG, to avoid any accidental name collision:
 +# ls /dev/vg-sflug
 +sda  sflug-disk-BACKUP  sflug-disk-BACKUP_2010-04-04
 +//we examine size of our source LV:
 +# lvdisplay /dev/vg-sflug/sda
 +  Current LE             2816
 +//we create our target LV,
 +//we'll use -lenny suffix,
 +//as that's codename of the "old" Debian GNU/Linux 5.0.10 release we're upgrading from,
 +//and less likely to cause confusion (e.g. backup, which backup, when, of what?)
 +# lvcreate -l 2816 -n sda-lenny vg-sflug
 +  Logical volume "sda-lenny" created
 +//we check on free PEs in our VG - we'll use them all for our snapshot (probably excessive),
 +//but we only need them for a quite short while anyway,
 +//so not an issue to be a bit excessive here.
 +//we reexamine our free PEs in the VG:
 +# vgdisplay /dev/vg-sflug
 +  Free  PE / Size       1147 / 4.48 GiB
 +//we use them all for our snapshot LV:
 +# lvcreate -l 1147 -n sda-snapshot -s /dev/vg-sflug/sda
 +  Logical volume "sda-snapshot" created
 +//I include use of time below, just to see how long it takes (dd also happens to provide similar information)
 +//and also, especially when operating as superuser (root), we want to be sure our commands are correct,
 +//Unix, and mostly likewise Linux, tends to presume one knows what one is doing,
 +//and probably doubly (or more) so for superuser,
 +//so carefully triple-check the command(s) before viciously striking the <RETURN> key
 +//(says me ;-) - has saved my butt on more than one occasion)
 +//here we want to be particularly careful with where we're writing our data to,
 +//what we remove,
 +//and also that we in fact pick our correct data source
 +# time dd if=/dev/vg-sflug/sda-snapshot of=/dev/vg-sflug/sda-lenny bs=4194304
 +2816+0 records in
 +2816+0 records out
 +11811160064 bytes (12 GB) copied, 424.946 s, 27.8 MB/s
 +real    7m4.957s
 +user    0m0.016s
 +sys     0m35.142s
 +//a bit over 7 minutes - from a point-in-time snapshot, recoverable,
 +//had we done that instead with source of the live rw mounted sda (well, containing sda1),
 +//what we'd have had in our target data would have been an inconsistent mess with no guarantees of recoverability
 +//and out of curiosity, just before removing it,
 +//I peek at our snapshot to see how much space it has actually used:
 +# lvdisplay /dev/vg-sflug/sda-snapshot
 +  Current LE             2816
 +  COW-table LE           1147
 +  Allocated to snapshot  0.16%
 +//as we can see, almost none (0.16%)
 +//and removing our snapshot:
 +# lvremove -f /dev/vg-sflug/sda-snapshot
 +  Logical volume "sda-snapshot" successfully removed
 +//removing the snapshot is important - if it's left around ("too long") and runs out of room, I/O on the LVs stops
 +//(that can then be corrected by extending the snapshot LV, or removing it)
 +//pretty much the same LV snapshot techniques are often used for live database (DB) backups, e.g.:
 +//put the relevant database files/devices in "backup mode",
 +//(done within the DB, and in many cases involves more than one which must be done together as a set),
 +//snapshot them (as a set), take database (or those files/devices) out of backup mode, start backing up the snapshots,
 +//proceed to next set, etc.,
 +//and also release each snapshot after it (or its set) has completed being backed up.
 +//so, as an exercise, we'll check the recoverability of our backup,
 +//since it's just one journaled filesystem (and swap - we don't care about the swap data),
 +//fairly simple to test (we won't do this test when we do our backup "for real" - we know it works,
 +//we're just going to demonstrate it here).
 +//And to note first, we're going to bend our UUID "rule" slightly,
 +//as here we're going to give the host a peek into the guest's UUIDs - but that's not an issue in this particular case,
 +//as we only let it see one instance of this UUID,
 +//as the other duplicate UUIDs are within the partitions of /dev/vg-sflug/sda,
 +//which we won't be having the host access (certainly at least not at the same time!)
 +//from our partition information earlier from the guest (we could also examine it from the host),
 +//we know how to construct a loop device to access just the first partition within our LV copy:
 +# losetup -o `expr 512 '*' 63` --sizelimit `expr 512 '*' 20980827` -f --show /dev/vg-sflug/sda-lenny
 +//does it match what we expect?
 +# blkid /dev/loop0
 +/dev/loop0: UUID="792a4837-91bc-4b36-bce7-a5a4208845bf" TYPE="ext3"
 +//yes ... journaled filesystem, so, the manner in which we copied it should be quite recoverable,
 +//but rather than just try to mount it (which may replay journal and do relatively light checks),
 +//let's do a heavier set of checks, and see what, if anything, it complains about ... complaints should be
 +//relatively negligible and few (mostly about state not being set to clean, and perhaps a bit of journal
 +//to replay - but that should be about it).
 +# fsck.ext3 -f /dev/loop0
 +e2fsck 1.41.12 (17-May-2010)
 +/dev/loop0: recovering journal
 +Clearing orphaned inode 278547 (uid=0, gid=1, mode=0100700, size=452)
 +Pass 1: Checking inodes, blocks, and sizes
 +Pass 2: Checking directory structure
 +Pass 3: Checking directory connectivity
 +Pass 4: Checking reference counts
 +Pass 5: Checking group summary information
 +/dev/loop0: ***** FILE SYSTEM WAS MODIFIED *****
 +/dev/loop0: 28306/1310720 files (2.1% non-contiguous), 294444/2621440 blocks
 +//and as we'd expect, quite "clean" (recoverable),
 +//some bit(s) like Clearing orphaned inode or similar, may be expected,
 +//e.g. if snapshot happens, and there are unlinked open files at the time,
 +//then those files no longer exist in any directory,
 +//but their inode (and storage, if any) hasn't yet been freed,
 +//while that's normal for Unix/Linux if a PID still has the files open,
 +//that's no longer relevant after the fact when we're cleaning up a backup of the filesystem,
 +//so that bit of housekeeping on the filesystem is still to be done (which would
 +//normally happen rather quickly on a live mounted filesystem after the last PID having the unlinked
 +//file open closed the file (explicitly, or via other action such as exit)).
 +//we could do more with that filesystem (e.g. testing, etc.) if we wished at this point,
 +//but that more than suffices for our testing here, so time to free up that loop device:
 +# losetup -d /dev/loop0
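As a quick cross-check of the loop-device arithmetic above, the offset and size limit passed to losetup can be recomputed in isolation - partition 1 starts at sector 63 and spans 20980827 sectors of 512 bytes, per the sfdisk output:

```shell
# recompute the byte offset and size limit used for losetup above;
# sector numbers are from the sfdisk -uS -l output for /dev/sda1
offset=$(expr 512 '*' 63)
size=$(expr 512 '*' 20980827)
echo "$offset $size"
# → 32256 10742183424
```

(32256 bytes is the classic 63-sector start of the first partition on old DOS-style disk labels.)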
 +    * **incorporating "lessons learned"** / glitches/gotchas found and discovered from earlier similar upgrades (namely balug qemu-kvm VM guest on same host):
 +      * from earlier similar upgrade done on balug qemu-kvm VM guest on same host, encountered **problem with reboot - namely upgrade procedure produced a flawed /boot/grub/menu.lst file** - so we'll **carefully inspect (and adjust, as necessary/appropriate) that file prior to performing that reboot step**.  From earlier with balug qemu-kvm VM guest:
 <file>
 installation "failed" around step 4.4.5 in:
Line 858: Line 1047:
 then able to proceed
 </file>
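One cheap way to catch that missing-initrd glitch before rebooting is to compare kernel vs. initrd line counts in /boot/grub/menu.lst. A sketch - the sample stanza in the heredoc is illustrative, not from the actual system; in real use you'd point it at /boot/grub/menu.lst:

```shell
# count kernel and initrd lines; unequal counts suggest a kernel
# stanza lost its initrd entry (the balug glitch described above)
awk '/^kernel/{k++} /^initrd/{i++} END{print k+0, i+0; exit (k!=i)}' <<'EOF'
title           Debian GNU/Linux, kernel 2.6.32-5-686
kernel          /vmlinuz-2.6.32-5-686 root=/dev/sda1 ro
initrd          /initrd.img-2.6.32-5-686
EOF
# → 1 1 (and exit status 0, since the counts match)
```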
-    * relevant Debian documentation:
 +    * **relevant Debian documentation**:
       * [[http://releases/stable/i386/release-notes/|Release Notes for Debian GNU/Linux 6.0 (squeeze), 32-bit PC]]
-        * especially: [[http://releases/stable/i386/release-notes/ch-upgrading.en.html|Upgrades from Debian 5.0 (lenny)]]
 +        * **especially: [[http://releases/stable/i386/release-notes/ch-upgrading.en.html|Upgrades from Debian 5.0 (lenny)]]**
-      * and also if/as/where/relevant: [[http://releases/stable/i386/|Debian GNU/Linux Installation Guide]] (installation instructions for the Debian GNU/Linux 6.0 system (codename “squeeze”), for the 32-bit PC (“i386”) architecture)
 +          * will do pre-upgrade steps in advance as feasible, have done the following:
 +            * 4.1.1\\ output of:\\ # dpkg --get-selections '*'\\ saved to:\\ /root/upgrades/5.0_lenny_to_6.0_squeeze/dpkg_--get-selections\\ from host (vicki), saved sflug VM configuration for reference, "just in case":\\ # virsh dumpxml sflug > /root/upgrades/5.0_lenny_to_6.0_squeeze/sflug/sflug.xml\\ will do fresh snapshot backup of sflug VM (virtual) disk before upgrade
 +            * 4.1.6 checked package splashy not installed
 +            * 4.2 - should be good on no 3rd party packages, etc.; system has been updated to the latest point release of lenny (5.0.10).
 +            * 4.2.1 checked good: No packages are scheduled to be installed, removed, or upgraded.
 +            * 4.2.2 good (/etc/apt/preferences not present)
 +            * 4.2.3:\\ looks good: # dpkg --audit\\ dpkg -l looks quite good, used: # dpkg -l | grep -v '^ii '\\ dpkg --get-selections '*' also quite clean, examined: # dpkg --get-selections '*' | awk '{print $2;}' | sort | uniq -c\\ also clean: # aptitude search "~ahold"
 +            * 4.2.4 proposed-updates not in /etc/apt/sources.list
 +            * 4.2.5 - presumably not an issue (doesn't appear to be an issue)
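The dpkg -l cleanliness check in 4.2.3 works by filtering away the normal "ii" (installed-ok) lines so only oddball package states remain. A small demonstration on canned dpkg -l style lines (the package names here are made up for illustration):

```shell
# anything not in state "ii" (e.g. "rc" = removed, config files remain)
# survives the filter and deserves a look before the upgrade;
# real use: dpkg -l | grep -v '^ii '
printf '%s\n' \
  'ii  bash          4.1-3   The GNU Bourne Again SHell' \
  'rc  some-old-pkg  1.0-1   removed, config files remain' |
grep -v '^ii '
# → rc  some-old-pkg  1.0-1   removed, config files remain
```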
 +          * **done** (main upgrade proper sequence, and immediately preceding steps):
 +            * did LV snapshot backup, from host (vicki) (target created earlier; also, this was set up to execute via at(1) at 5:40 P.M. on the day of the upgrade - 20 minutes prior to start of our upgrade maintenance window; in earlier test runs it completed in under 10 minutes; checked that it completed okay - script and output on vicki host under /root/upgrades/5.0_lenny_to_6.0_squeeze/sflug/):
 +              * # lvcreate -l 1147 -n sda-snapshot -s /dev/vg-sflug/sda
 +              * # time dd if=/dev/vg-sflug/sda-snapshot of=/dev/vg-sflug/sda-lenny bs=4194304
 +              * # lvremove -f /dev/vg-sflug/sda-snapshot
 +              * and here's what the script that was executed looks like, and its output:
 +# hostname; pwd -P; cat bkup; echo; cat bkup.out
 +set -e
 +umask 077
 +cd /
 +at 1740 << \__EOT__
 +set -e
 +umask 077
 +cd /root/upgrades/5.0_lenny_to_6.0_squeeze/sflug/
 +exec >> bkup.out 2>&1
 +echo start `date -Iseconds`
 +[ -b /dev/vg-sflug/sda-lenny ]
 +lvcreate -l 1147 -n sda-snapshot -s /dev/vg-sflug/sda && {
 +time dd if=/dev/vg-sflug/sda-snapshot of=/dev/vg-sflug/sda-lenny bs=4194304 || 2>&1 echo dd returned $?
 +lvremove -f /dev/vg-sflug/sda-snapshot
 +}
 +echo end `date -Iseconds`
 +__EOT__
 +start 2012-03-14T17:40:00-0700
 +  Logical volume "sda-snapshot" created
 +2816+0 records in
 +2816+0 records out
 +11811160064 bytes (12 GB) copied, 421.079 s, 28.0 MB/s
 +0.01user 34.83system 7:01.09elapsed 8%CPU (0avgtext+0avgdata 19248maxresident)k
 +23068824inputs+23068680outputs (1major+1242minor)pagefaults 0swaps
 +  Logical volume "sda-snapshot" successfully removed
 +end 2012-03-14T17:47:04-0700
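As a quick sanity check on the dd figures in that output, the reported rate is just bytes copied divided by elapsed seconds:

```shell
# 11811160064 bytes in 421.079 s should match dd's reported 28.0 MB/s
awk 'BEGIN { printf "%.1f MB/s\n", 11811160064 / 421.079 / 1000000 }'
# → 28.0 MB/s
```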
 +            * present Debian GNU/Linux 6.0.4 "Squeeze" - Official i386 CD Binary-1 20120128-12:53 (or latest 6.0.x) from host to sflug guest.\\ We do this as a SCSI "disk" (though SCSI CD-ROM would be even more preferred), as we can't hot remove virtual IDE devices from guest (though we can change virtual media ... but sometimes we want to present multiple ISO images concurrently, and it's preferable to be able to remove the devices when they're no longer needed or wanted), so we present it from host vicki thusly:\\ # virsh attach-disk sflug /var/local/pub/mirrored/debian-cd/6.0.4/i386/iso-cd/debian-6.0.4-i386-CD-1.iso sdc --mode readonly\\ Note that that sometimes causes a slight booboo in the guest's configuration, which we'd fix via:\\ # virsh edit sflug\\ namely we'd want to change the driver name from file to qemu in the section for our added disk - otherwise the guest will fail to restart once (virtually) powered off.  In our particular case this time around, virsh appeared not to introduce that issue (it used driver name phy) - probably "good enough", so we didn't bother to "correct"/alter that at time of upgrade; we do, however, still need to test full (virtual) power down and subsequent reboot of guest
 +            * on guest, scan SCSI devices, create mount point (if not present), update /etc/fstab, and mount it:
 +              * # (for tmp in /sys/class/scsi_host/host*/scan; do echo '- - -' > "$tmp"; done)
 +              * # (umask 022 && mkdir /media/cdrom9)
 +              * Even though we told the host to present the device to guest as sdc, it showed up on guest as sdb, so we went with that:
 +              * add to /etc/fstab:\\ /dev/sdb /media/cdrom9 udf,iso9660 ro,nosuid,nodev 0 0
 +              * # mount /media/cdrom9
 +            * 4.3 - continue per documentation\\ add new cdrom ISO to /etc/apt/sources.list:\\ # apt-cdrom -m -d=/media/cdrom add\\ and make other /etc/apt/sources.list updates
 +            * looking now at the above, and checking shell history: had actually intended to use /media/cdrom9 for the above apt-cdrom command, but didn't catch that mis-entry at the time ... so apt and friends "of course" then wanted to use /media/cdrom instead of /media/cdrom9 ... at the time, just made a symbolic link to cover that, and then continued from there:
 +            * # ln -s /media/cdrom9 /media/cdrom
 +            * ...
 +            * 4.4.5 **before** first reboot, inspected /boot/grub/menu.lst, and corrected/edited if/as relevant (probably also increase timeout if a "too short" default timeout got placed in there) - /boot/grub/menu.lst actually looked fine - unlike the glitch we encountered when upgrading the balug VM, where the new kernel entries were missing their initrd entries.
 +            * inspected legacy grub and grub2 configurations - grub looked fine; tweaked grub2 - mostly via /etc/default/grub and mostly modeling after the quite similar configuration from the balug VM guest with grub2; tested and legacy grub boot looked fine; chainload of grub2 worked and looked fine; booted via grub2 chainload; used the indicated procedure to then convert/update to grub2 (including MBR) and remove (but not purge) legacy grub (all handled pretty much automagically for us by running the one indicated program that did that for us - we just had to "tell" it (via presented options) where grub2 was to "install" (write its MBR) - the rest was highly automagic); tested reboot with direct grub2 without any "user"(/admin) input being used on console (graphics or terminal) to ensure all came up fine multi-user without any input needed (to ensure unattended (re)boots should generally work quite well and as expected).
 +            * ...
 +            * we essentially completed through step 4.6.3, "skipped" (deferred) 4.7 (the part before 4.7.1), and completed step 4.7.1, and then deferred subsequent steps (4.8 and beyond) for later subsequent "clean-up" and the like, having essentially made it through all the particularly critical and important portions of the main sequence of the upgrade proper.
 +            * we also checked that both the web server and DNS server on the host were up and running and operational as expected.
 +          * **and done(!)** - these bits were completed later, but not later than 2012-03-17:
 +            * went through and completed our various clean-up tasks:
 +              * reviewed configurations - in the interests of time (and uptime), for some of the configurations where we had local modifications, we had just used our "old" configurations, and hadn't examined or merged/integrated our earlier changes with the newer default configurations; completed going through those carefully and adjusting and cleaning up as appropriate
 +              * init / rc stuff - we didn't earlier get the advantages of the newer faster start-up system - some legacy/dependency bit prevented that from being activated; reviewed that and got that all squared away
 +              * 4.7 (the part before 4.7.1)
 +              * steps starting at and beyond step 4.8
 +              * saved script output from most of the upgrade process - reviewed that for any potentially security sensitive bits, and as there weren't any significant security issues with public disclosure, made that data available (at least "for a while") at: [[http://upgrade-squeeze.tar.bz2]]
 +              * high-level (human written (with some machine assist) and intended for human consumption) log file for the system is available at:\\ [[http://log.txt]] - for this system upgrade bit, look/search for the line that starts with:\\ 2012-03-14\\ (at the time of this writing that's near the end of that file) ... and got that file quite caught up.
 +            * did full virtual power reboot test (did reboot tests 2012-03-14, but not with full virtual power down and back up again of guest), ensured that guest comes up fine with no need for any console input being provided
 +      * and (documentation bits - didn't really need to refer to or use this particular document with/for our upgrade) also if/as/where/relevant: [[http://releases/stable/i386/|Debian GNU/Linux Installation Guide]] (installation instructions for the Debian GNU/Linux 6.0 system (codename “squeeze”), for the 32-bit PC (“i386”) architecture)
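Related to the UUID care discussed earlier: since the guest mounts its filesystems by UUID= in /etc/fstab, a quick audit for entries that still mount by raw device path is easy with awk. A toy version on a canned two-line fstab (real use would read /etc/fstab itself):

```shell
# flag fstab entries whose first field is a raw device path rather than
# UUID= (or LABEL=); the cdrom line added during the upgrade is one such
printf '%s\n' \
  'UUID=792a4837-91bc-4b36-bce7-a5a4208845bf / ext3 errors=remount-ro 0 1' \
  '/dev/sdb /media/cdrom9 udf,iso9660 ro,nosuid,nodev 0 0' |
awk '$1 !~ /^(UUID|LABEL)=/ { print "non-UUID entry:", $1 }'
# → non-UUID entry: /dev/sdb
```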
system/vicki_debian_lenny_to_squeeze.1331611014.txt.bz2 · Last modified: 2012-03-13T03:56:54+0000 by michael_paoli