[BALUG-Talk] (forw) Re: [License-review] Fwd: [Non-DoD Source] Resolution on NOSA 2.0
Fri Jun 22 22:32:56 PDT 2018
Just because, I'll sharpen some of those points that are relevant to
Tuesday's BALUG discussion.
Speaker Liz Krumbach, after her excellent talk about DC/OS, gave
generously of her time for a Q&A session with attendees sitting around
the table with her at Henry's Hunan.
In passing, Liz mentioned (and I'll paraphrase, here, and apologise in
advance if I misstate anything) that, although her heart is still in the
notion of independent Internet hosting by individuals and community
institutions, the reality is that pretty much everything's moving to
some form of hosted cloud storage, and that even she and her husband now
freely put some fairly sensitive personal data in Google Docs these
days, even though they know all about the drawbacks. (Fair enough.
That's her perception, and her and her husband's choice, and I know Liz
well enough to understand her actions to be thoughtful.)
She also pointed out, again perfectly reasonably, that increasingly most
people don't host their own files in practically any category: they
stream their video content, stream their music, and so on, even though
they _could_ be more independent.
Two BALUG attendees, whom I could name (but will be nice), _then_ chimed in,
going rather far beyond what Liz said (while acting as if they were
agreeing), saying that anyone who attempts to run a personal Internet
server (and they said they knew there were some sitting around the
table) was a certifiable lunatic, and that it was utterly impractical.
(Hi, I'm Rick Moen, listadmin of most of the Bay Area's Linux User Group
mailing lists and owner/operator of linuxmafia.com / unixmercenary.net,
home of two LUGs, a major community calendar, etc. on old castoff
machines I run literally on fixed IP on home ADSL in my garage. I've
done this sort of thing since the 1980s, when as a staff accountant with
no IT background, I started running my own Internet servers because
nobody told me I couldn't. I just threw things together, tinkered a
little, learned on the fly, and it worked. _Later_, I became a system
administrator as my profession.)
> The Scheme community failed to take basic steps to ensure that there was
> functional, periodically tested failover to independent machine
> resources, and also tested offsite backups.
The core of this is: Do you have a creditable plan of action for each
of the things that might plausibly be expected to go wrong? Planning
for Internet infrastructure, no different from planning for _any_
organised activity, requires contingency planning. Running a
department, or pretty much anything else? Then, make sure people know
what to do or not do if there's a fire. Or a power outage. Or a
flooded floor in an office building because the sprinklers went off or
the pipes broke upstairs.
In the case of computers and network infrastructure, exactly the same
principle applies.
Hardware: Moving-rust mass storage, fans, cables, and input devices
wear out most often, from mechanical stress. SSDs wear out too, from a
different type of stress. Do you have a plan for when, not if, that
happens?
Software: Upgrades can go tragically bottom-side up if coincidental to a
hardware failure. (Ask me how I know.) A tired sysadmin, or any junior
SA armed with root access, is even more dangerous to systems than a
programmer bearing a screwdriver. Mishaps happen. Do you have a
documented plan for _what_ to back up and exactly how to restore, and
is it kept somewhere safe? (I do, and archive.org and cache.google.com
back it up for me.)
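A written backup plan can even be made executable. A minimal sketch (the
manifest contents and paths here are illustrative placeholders, not my
actual list):

```shell
#!/bin/sh
# Sketch: keep the "what to back up" list in a plain-text manifest file,
# one path per line, and turn it into a dated tarball. The manifest file
# itself then doubles as the documentation of backup scope.
snapshot() {
  # Tar up every path named in the manifest into a dated archive.
  manifest="$1"; outdir="$2"
  tar -czf "$outdir/backup-$(date +%F).tar.gz" $(cat "$manifest")
}

# Typical use (paths hypothetical):
#   snapshot /etc/backup-manifest /srv/backup
```

Restoring is then just untarring onto a fresh install, which is easy to
write down and easy to test.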
Is your DNS free of single points of failure, and multiple people able
and willing to administer the domain and DNS?
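One quick sanity check for DNS single points of failure: are all your
nameservers sitting in the same subnet (and therefore likely behind the
same router, rack, or power feed)? A small sketch (example.com stands in
for your domain; it assumes `dig` is installed for the live query):

```shell
#!/bin/sh
# Sketch: count how many distinct /24 networks your nameservers occupy.
# distinct_nets reads IPv4 addresses on stdin, one per line; a result
# below 2 means a single outage can take out all of your DNS at once.
distinct_nets() {
  cut -d. -f1-3 | sort -u | wc -l
}

# Typical use against live DNS (domain is a placeholder):
#   dig +short NS example.com | xargs -n1 dig +short | distinct_nets
```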
Do you have either failover or a credible plan to bring replacement
hardware / software online if it's important but not urgent? Like:
> Life is imperfect, so I know these things usually amount to 'We didn't
> have time and energy to do more than a half-assed job, so that's what we
> did', but, seriously, consider what the low-hanging fruit would have
> been, just an afternoon's work on the failover problem:
> 1. Separate person sets up a *ix machine on static IP in his/her
> (separate) premises. Today, this could be on a junk P4 or PIII.
> 2. That person coordinates with the first guy to do daily rsyncs
> of all important data to the failover box.
> 3. In the ideal case, the failover box would also be fully configured
> as a hot spare, but this is a fine point. It would probably suffice
> for a few qualified experts to agree that all essential data are being
> captured daily. If production host fails or the guy in charge goes
> crazy, parties able to control DNS flip it, and if necessary there's a
> mad scramble to make the failover host fully functional. The main thing
> that cannot be done without is the data.
> 4. Some ongoing monitoring is required, to make sure failover
> replication doesn't silently break.
> For extra credit, once every six months, _deliberately_ flip the
> failover switch. Nothing is quite as good at proving ability to
> failover as doing failover.
People look at the rickety old PIII in my garage and think 'Eek, that's
fragile.' Sure it is. But highly replaceable, as witness the fact that
linuxmafia.com has gone through about eight motherboards in a variety of
throwaway, worthless server boxes, and as many sets of hard drives.
It certainly can die. And then I just deploy another.
If I wanted to, if I weren't too lazy to do the work, it wouldn't be
difficult to do hot-sparing. Constructing the same server twice is
really no more difficult than doing it once, and then at your leisure
you can set up periodic rsync backup scripts to update the spare box's
data, to slave the spare's database process off the master's so it's
always kept updated, and so on. Resources required: Two throwaway
server machines, two static IP addresses, a little thinking and testing,
part of a weekend. For extra credit, have a hot spare offsite.
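The "make sure replication doesn't silently break" piece can be as small
as a crontab entry; a config sketch with made-up script name, schedule,
and addressee:

```shell
# crontab fragment (illustrative): nightly replication, plus a loud
# complaint to root if it fails -- the ongoing-monitoring piece.
30 3 * * *  /usr/local/sbin/replicate-to-spare || echo 'replication FAILED' | mail -s 'spare is stale' root
```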
For additional extra credit, enroll all instructions for constructing a
spare machine from bare metal (or minimal OS install) into your choice
of configuration management software (Chef, Puppet, Ansible, etc.) to
make spinning one up even faster and easier.
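That enrollment can be quite short. A hypothetical Ansible playbook
sketch (package names, paths, and hostnames are illustrative, not my
server's actual build):

```yaml
# rebuild.yml -- config sketch: rebuild a spare from a minimal install.
# Run as: ansible-playbook -i spare.example.org, rebuild.yml
- hosts: all
  become: true
  tasks:
    - name: Install the server's package set
      ansible.builtin.apt:
        name: [exim4, apache2, mailman3, rsync]
        state: present
    - name: Restore replicated configuration from the mirror area
      ansible.builtin.copy:
        src: /srv/mirror/etc/
        dest: /etc/
        remote_src: true
```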
If this is lunacy, it's the lunacy that gets things done. And the point
is: It's not friggin' brain surgery, including the high availability /
failover pieces that give you for free all the _important_ bits of what
the big boys charge money for. The trick is to _know what you're doing_
and verify that the important parts are covered.
And knowing what you're doing isn't that hard. I figured it out just by
tinkering and paying attention back when I was a _staff accountant_.
No, you won't achieve five nines of reliability, but you won't need to;
you're not running NASDAQ. You won't be completely immune to DDoSing,
but you won't need to; you're not Cloudfront.net.
What you _can_ do without too much difficulty as an aspiring technical
Linux user -- because this happens to be what Linux is really good at --
is basic, not too fancy, generic Internet services that Linux has done
extremely well for around a quarter century: e-mail, mailing lists,
regular ol' Web sites, ssh, a medley of other things. Making the
contents be easily portable indefinitely comes along for free, which is
definitely not true of more-complex online content and fancier hosted
services. And doing sysadminly assurance of failover, backup/restore,
etc. is not a lot harder.
Of the two guys who called anyone who makes the effort and walks the walk
of open source and autonomous Internet presence a 'lunatic',
one is a San Franciscan who relies on a partimus.org Internet virthost
that he appears to do nothing to keep operating (that host being not
even an autonomous Internet server, but rather just a virtual-hosted
domain on Dreamhost, a specialty WordPress hosting provider so terrible
at e-mail/mailing lists that BALUG's admins, mostly Michael Paoli and
partly me, ran a special project just to get BALUG off their hosting).
The other is a Berkeley guy with an ever-changing name who sharecrops
for Google, i.e., using nothing but free-of-charge Google services.
Neither of them has to my knowledge ever even _aspired_ to learn how to
set up and run Internet infrastructure for the Linux Internet community,
yet both are sure that someone who does is a 'lunatic', and that what many
such people have done pretty easily and for peanuts in monetary outlay
is hopelessly impractical.
I see. Right. Puts it in context.
Cheers, "A recursive .sig
Rick Moen Can impart wisdom and truth.
email@example.com Call proc signature()"
McQ! (4x80) -- WalkingTheWalk on Slashdot