> From: "Michael Paoli" <Michael.Paoli(a)cal.berkeley.edu>
> To: BALUG-Talk <balug-talk(a)lists.balug.org>
> Subject: test DNS that returns SERVFAIL? ... ! :-)
> Date: Mon, 13 Apr 2020 03:14:35 -0700
> Ah yes, I'm quite starting to get used to and like/prefer dynamic DNS
> update. Significantly more goof-resistant, and most of the time I don't
> even have to think about the zone serial number. Which reminds me,
> I do still want to add some version "control" (tracking) ... driven via
> cron, so I'll at least have periodic snapshots of changes (since no
> longer using ye olde manual method & manual version control). For
> more recent changes, and fine-grained history of changes, logs cover
> that quite well. But for the longer historical record ... wee bit 'o
> gap presently to fill on that.
And now added. Doesn't catch the "why", and doesn't catch
change-by-change, but automatic daily check-in of any changes,
"good enough" for my(/our?) purposes here, and have that now:
$ cat /etc/cron.d/local-bind-master-auto-rcs
0 10 * * * root exec >>/dev/null 2>&1 && for zone in
e.9.1.0.5.0.f.1.0.7.4.0.1.0.0.2.ip6.arpa berkeleylug.com sf-lug.com
sflug.com sf-lug.net sflug.net balug.org berkeleylug.org sf-lug.org
sflug.org digitalwitness.org; do rndc sync -clean "$zone" && rndc
freeze "$zone" && { rcsdiff "$zone" || { rcsdiffrc="$?"; [ "$rcsdiffrc"
-ne 1 ] || { ci -l -d -M -m'checking in change(s)' "$zone"; }; }; rndc
thaw "$zone"; }; done; :
$ hostname
balug-sf-lug-v2.balug.org
$ ls -l /etc/localtime
lrwxrwxrwx 1 root root 27 Apr 19 09:56 /etc/localtime -> /usr/share/zoneinfo/Etc/UTC
$
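For later review of that history, the standard RCS tools suffice;
a minimal sketch (the zone file directory here is an assumption):
$ cd /var/lib/bind
$ rlog balug.org           # full check-in history for the zone file
$ rcsdiff -r1.1 balug.org  # current zone file vs. the first check-in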
Okay, one more time ... hopefully relatively smoothly this time.
Have done a fair bit more thorough testing, essentially on
"cloned" virtual machines (while also taking the appropriate
precautions to avoid conflicts with the running production).
Anyway, will restart the upgrade process soon.
If all goes relatively smoothly, should be complete after
a few hours or so.
Services will, for the most part, remain up and operational
throughout, but there will definitely be at least some disruptions,
due to some required reboot(s) and the stopping and restarting of
services. I'll update once things have completed and look good ... or
report any substantially different results.
This BALUG VM is the host that supports essentially all services
(except some additional DNS slaves) of:
BALUG
and likewise, additionally excepting list services, for:
SF-LUG
BerkeleyLUG
references/excerpts:
https://lists.balug.org/pipermail/balug-admin/2020-February/001017.html
https://lists.balug.org/pipermail/balug-admin/2020-February/001018.html
http://linuxmafia.com/pipermail/conspire/2020-April/010521.html
http://linuxmafia.com/pipermail/conspire/2020-April/010525.html
The BALUG.org Virtual Machine (VM),
balug / balug-sf-lug-v2.balug.org,
which hosts most all things, not only BALUG.org,
but also SF-LUG.org (excepting its lists) and BerkeleyLUG.com -
I'm intending to upgrade its operating system soon.
Expecting some modest outages with some service restarts and reboots,
etc. But expecting for the most part it will remain up.
I did also earlier upgrade "vicki"
https://lists.balug.org/pipermail/balug-admin/2020-February/001016.html
just fine,
and also successfully completed a dry run test upgrade
(see the sketch following this list for the snapshot/copy steps):
created LVM snapshot of the running balug VM,
copied snapshot to LVM volume,
built VM around that volume - and configured to be
highly similar to balug (differing only in Ethernet
MAC address and VM related UUID(s) and name
and networking (different subnet/network/vLAN)),
disabled network link at VM before "powering up" the VM,
reconfigured network so as to not conflict with balug,
disabled services (most especially anything that would attempt
to send email), enabled network link and networking,
went through the full upgrade process - all went well,
no issues of any significance encountered.
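For the snapshot and copy steps above, a minimal sketch - the volume
group, volume names, and sizes here are assumptions, not the actual
values used:
lvcreate -s -n balug-snap -L 10G /dev/vg0/balug  # snapshot the running VM's disk
lvcreate -n balug-test -L 100G vg0               # target volume to copy into
dd if=/dev/vg0/balug-snap of=/dev/vg0/balug-test bs=4M conv=fsync
lvremove -f /dev/vg0/balug-snap                  # drop the snapshot once copied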
Anyway, now I move on to the "real deal" (the actual balug VM).
I'll report if any significant issues come up - otherwise
presume all has gone fine and the upgrade will be completed soon (likely
today).
So, starting on 2020-01-31
I upgraded the "vicki" host (physical: Supermicro PDSMi)
from:
Debian GNU/Linux 9.11 (stretch) x86_64
to:
Debian GNU/Linux 10.2 (buster) x86_64
vicki is the occasional* host of the
balug Virtual Machine (VM) - that
VM hosts most all things:
BALUG.org
SF-LUG.org (excepting list(s))
BerkeleyLUG.com
(and excepting DNS slaves thereof)
*occasional - vicki mostly hosts the BALUG VM when it's not otherwise
being hosted on its nominal host (which is much quieter and has
significantly more RAM, but alas, is a laptop, and semi-regularly
ventures out or is otherwise unavailable for hosting the balug VM -
hence the VM typically live migrates first to vicki on such occasions,
and then back again when the nominal host is available again). The
(rather loud, and power hungry) vicki host spends most of its time
powered off when it's not hosting the balug VM.
And the upgrade ... was mostly done live and went quite smoothly.
Here are the slight bits that took a little more/additional attention:
apt[-get] autoremove - After the upgrade, apt[-get] was suggesting
using autoremove to remove a bunch of "no longer needed" stuff. I
reviewed the list; most of it was fine/routine (e.g. lots of libraries
- which would otherwise remain only if the packages dependent upon
them also remained), ... but some bits of it weren't, e.g. bridge-utils
(notably at least, I sometimes use brctl, and it may also be used by
ifup/ifdown and friends, and/or libvirt). There may have been a small
number of packages other than that which autoremove would've defaulted
to removing, but which I opted to retain. After suitably adjusting the
status of those (so apt[-get] would show them as explicitly requested
rather than just brought in to satisfy some dependency(/ies)), the
autoremove proceeded fine (with --purge - the relevant pre-upgrade
backups were done, so no need to otherwise explicitly retain configs
for packages being removed).
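A minimal sketch of that status adjustment, e.g. via apt-mark, using
bridge-utils (the one package named above - any others would be
assumptions):
apt-mark manual bridge-utils   # mark explicitly installed so autoremove keeps it
apt-get --purge autoremove     # then let autoremove proceed, purging configs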
libvirt - live migration of the VM from
Debian GNU/Linux 9.11 (stretch) x86_64
to:
Debian GNU/Linux 10.2 (buster) x86_64
worked fine, but trying to live migrate back failed.
Found and corrected the issue to allow backwards compatibility with
Stretch; from the (hand-maintained, for human(s)) logs:
To allow live migration back to Debian 9.x without apparmor,
in file /etc/libvirt/qemu.conf
changed:
security_driver = "selinux"
to:
#security_driver = "selinux"
#####
security_driver = "none"
#####
and restarted libvirtd
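A sketch of applying and verifying that change (balug is the VM's
domain name; the grep is just one way to check):
systemctl restart libvirtd
virsh dominfo balug | grep -i '^security'  # should now report security model "none"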
I did end up rebooting the VM after making that change though.
Although the live migration to Buster had added the different security
bit, I wasn't able to quickly and easily determine a way to live remove
that. But the balug VM was long overdue for a reboot anyway (not on
latest security kernel update, likewise older libraries/binaries still
in process RAM from long-running daemon processes, etc.), so rebooted,
as the expedient way to both address that and the apparmor bit that
Buster had dragged into the running VM. Was then able to live migrate
back to Stretch without issue.
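For reference, a live migration of this sort is typically done along
these lines (the URI and names here are assumptions):
virsh migrate --live --persistent balug qemu+ssh://vicki/system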
That was basically it, no other issues of note bumped into (did review
the install and release notes documentation, did first cover relevant
items from there - most notably network device names - and having
covered that first avoided any surprises on that).
Nameserver ns0.balug.org responds to ping, responds to DNS queries,
_but_ refuses AXFR request for zone balug.org from slave nameserver
ns1.linuxmafia.com. (Below mail from logcheck shows the first
appearance of refusal in the logs; this behaviour is still ongoing.)
[rick@linuxmafia]
~ $ dig -t axfr balug.org @96.86.170.229
;; Connection to 96.86.170.229#53(96.86.170.229) for balug.org failed:
connection refused.
[rick@linuxmafia]
~ $
----- Forwarded message from logcheck system account <logcheck@linuxmafia.com> -----
Date: Thu, 16 Jan 2020 12:02:01 -0800
From: logcheck system account <logcheck@linuxmafia.com>
To: root@linuxmafia.com
Subject: linuxmafia.com 2020-01-16 12:02 System Events
System Events
=-=-=-=-=-=-=
Jan 16 11:33:15 linuxmafia named[4620]: client 96.86.170.229#6229: received notify for zone 'balug.org'
Jan 16 11:33:15 linuxmafia named[4620]: zone balug.org/IN: Transfer started.
Jan 16 11:33:15 linuxmafia named[4620]: transfer of 'balug.org/IN' from 96.86.170.229#53: failed to connect: connection refused
Jan 16 11:33:15 linuxmafia named[4620]: transfer of 'balug.org/IN' from 96.86.170.229#53: Transfer completed: 0 messages, 0 records, 0 bytes, 0.028 secs (0 bytes/sec)
----- End forwarded message -----
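Since zone transfers run over TCP, one quick check (a diagnostic
sketch, not from the original thread) is whether TCP/53 on the master
answers at all:
$ dig +tcp -t soa balug.org @96.86.170.229
If that also gets "connection refused", the issue is TCP/53
reachability (firewall, or named not listening on TCP), rather than
BIND's allow-transfer ACL (an ACL denial would return a REFUSED
response, not a refused connection).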
So, I'll be changing ISP (at least for IPv4) ...
that'll mean new IPs for IPv4 ... most notably for these [L]UG domains
(and in general subdomains thereof):
balug.org sf-lug.org sflug.org sflug.com sflug.net sf-lug.net
sf-lug.com berkeleylug.com berkeleylug.org
A bit more generally:
By not later than end of 2019-12-19, these IPv4 IPs (range) will be
"going away"/changing:
198.144.194.232/29 (198.144.194.232 - 198.144.194.239):
198.144.194.232
198.144.194.233
198.144.194.234
198.144.194.235
198.144.194.236
198.144.194.237
198.144.194.238
198.144.194.239
At present, my IPv6 is provided from another ISP (tunneled via IPv4),
so I'm expecting the IPv6 IPs will remain the same (at least through
this transition anyway).
Anyway, hopefully all these changes will happen rather to quite smoothly
... but it's a bit early to say. E.g., as I transition ISPs, there may
be a brief(ish) period in between, when temporarily neither is available.
We shall see. I'll update relevant contact(s)/lists as/when
relevant and appropriate and I'm reasonably able to do so.
Hopefully I can minimize any "user facing" impact ... if I'm
reasonably lucky, may be able to do it quite seamlessly, with
essentially no impact to "users" ... we shall see.
references/excerpts:
http://linuxmafia.com/pipermail/conspire/2019-November/009966.html