Amusing EximConfig overzealousness.
----- Forwarded message from Mail Delivery System <Mailer-Daemon@linuxmafia.com> -----
Date: Fri, 22 Jun 2018 22:11:56 -0700
From: Mail Delivery System <Mailer-Daemon@linuxmafia.com>
To: rick@linuxmafia.com
Subject: Mail delivery failed: returning message to sender
This message was created automatically by mail delivery software.
A message that you sent could not be delivered to one or more of its recipients. This is a permanent error. The following address(es) failed:
  balug-talk@lists.balug.org
    SMTP error from remote mail server after end of data:
    host mx.lists.balug.org [198.144.194.238]:
    550-Rejected message body text: URL link to prohibited file:
    550-http://linuxmafia.com/faq/Admin/linuxmafia.com
    550-.
    550-[EximConfig-2.5-balug.org-Body-Reject]
    550-.
    550-.Verify: verified-29120-balug-talk@lists.balug.org
    550-Contact: postmaster@balug.org
    550-.
    550-Sorry, your message has been rejected because
    550-its body text/content is prohibited for the
    550-above reason.
    550-.
    550-We apologise if you have sent a legitimate
    550-message and it has been blocked. If this is
    550-the case, please re-send adding verified-29120-
    550-to the beginning of the E-mail address of each
    550-recipient. If you do this, your message will
    550-get through these restrictions.
    550-.
    550-If your message has been incorrectly blocked,
    550-please let us know at the above contact address.
    550 .
------ This is a copy of the message, including all the headers. ------
Return-path: <rick@linuxmafia.com>
Received: from rick by linuxmafia.com with local (Exim 4.72)
	(envelope-from <rick@linuxmafia.com>)
	id 1fWapc-00029x-Cx
	for balug-talk@lists.balug.org; Fri, 22 Jun 2018 22:11:28 -0700
Date: Fri, 22 Jun 2018 22:11:28 -0700
From: Rick Moen <rick@linuxmafia.com>
To: balug-talk@lists.balug.org
Subject: Re: [BALUG-Talk] (forw) Re: [License-review] Fwd: [Non-DoD Source] Resolution on NOSA 2.0
Message-ID: <20180623051128.GD32401@linuxmafia.com>
References: <20180622212441.GA32401@linuxmafia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20180622212441.GA32401@linuxmafia.com>
Organization: If you lived here, you'd be $HOME already.
X-Mas: Bah humbug.
X-Clacks-Overhead: GNU Terry Pratchett
User-Agent: Mutt/1.5.20 (2009-06-14)
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: rick@linuxmafia.com
X-SA-Exim-Scanned: No (on linuxmafia.com); SAEximRunCond expanded to false
Just because, I'll sharpen some of those points that are relevant to Tuesday's BALUG discussion.
Speaker Liz Krumbach, after her excellent talk about DC/OS, gave generously of her time for a Q&A session with attendees sitting around the table with her at Henry's Hunan.
In passing, Liz mentioned (and I'll paraphrase, here, and apologise in advance if I misstate anything) that, although her heart is still in the notion of independent Internet hosting by individuals and community institutions, the reality is that pretty much everything's moving to some form of hosted cloud storage, and that even she and her husband now freely put some fairly sensitive personal data in Google Docs these days, even though they know all about the drawbacks. (Fair enough. That's her perception, and her and her husband's choice, and I know Liz well enough to understand her actions to be thoughtful.)
She also pointed out, again perfectly reasonably, that increasingly most people don't even host their own files in practically any category, streaming their video content, streaming their music, and so on. Even though they _could_ be more independent.
Two BALUG attendees, whom I could name (but will be nice), _then_ chimed in, going rather far beyond what Liz said (while acting as if they were agreeing), saying that anyone who attempts to run a personal Internet server (and they said they knew there were some sitting around the table) is a certifiable lunatic, and that the whole thing is utterly impractical.
(Hi, I'm Rick Moen, listadmin of most of the Bay Area's Linux User Group mailing lists and owner/operator of linuxmafia.com / unixmercenary.net, home of two LUGs, a major community calendar, etc. on old castoff machines I run literally on fixed IP on home ADSL in my garage. I've done this sort of thing since the 1980s, when as a staff accountant with no IT background, I started running my own Internet servers because nobody told me I couldn't. I just threw things together, tinkered a little, learned on the fly, and it worked. _Later_, I became a system administrator as my profession.)
The Scheme community failed to take basic steps to ensure that there was functional, periodically tested failover to independent machine resources, and also tested offsite backups.
The core of this is: Do you have a credible plan of action for each of the things that might plausibly be expected to go wrong? Planning for Internet infrastructure, no different from planning for _any_ organised activity, requires contingency planning. Running a department, or pretty much anything else? Then, make sure people know what to do or not do if there's a fire. Or a power outage. Or a flooded floor in an office building because the sprinklers went off or the pipes broke upstairs.
Computers and network infrastructure are no different; the same principle applies.
Hardware: Moving-rust mass storage, fans, cables, and input devices wear out most often, from mechanical stress. SSDs wear out too, from a different type of stress. Do you have a plan for when, not if, that happens?
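To make 'when, not if' a bit more concrete, the early-warning half of that plan can be as small as a cron job that asks smartctl (from smartmontools) for each drive's overall health verdict. A minimal sketch follows; the device names are placeholders, so adjust to your own hardware, and it assumes it runs as root:

    #!/usr/bin/env python3
    # Nightly SMART health poll: a sketch, not a monitoring system.
    # Assumes smartmontools ('smartctl') is installed and this runs as root.
    import subprocess
    import sys

    DRIVES = ["/dev/sda", "/dev/sdb"]   # placeholder device names

    failures = []
    for dev in DRIVES:
        # 'smartctl -H' prints the drive's overall health self-assessment.
        result = subprocess.run(["smartctl", "-H", dev],
                                capture_output=True, text=True)
        if "PASSED" not in result.stdout:
            failures.append((dev, result.stdout.strip()))

    if failures:
        for dev, report in failures:
            print(f"SMART trouble on {dev}:\n{report}", file=sys.stderr)
        sys.exit(1)   # nonzero exit, so cron mails you the bad news
    print("All listed drives report SMART health: PASSED")

That way the 'when' arrives with some warning instead of at 3 a.m.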
Software: Upgrades can go tragically bottom-side up if coincidental to a hardware failure. (Ask me how I know.) A tired sysadmin, or any junior SA armed with root access, is even more dangerous to systems than a programmer bearing a screwdriver. Mishaps happen. Do you have a documented plan for _what_ to back up and exactly how to restore, and have it somewhere safe? (I do, and archive.org and cache.google.com back it up for me: http://linuxmafia.com/faq/Admin/linuxmafia.com-backup.html )
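For illustration only (my actual documented procedure is at the URL above), the skeleton of such a plan can be as small as a script that spells out both halves: what gets saved, and how it comes back. The paths below are placeholders, not my real layout:

    #!/usr/bin/env python3
    # A 'what to back up' list made executable.  The real point is that the
    # list and the restore step are written down, not the tooling.  Paths
    # are illustrative placeholders; run as root so everything is readable.
    import datetime
    import tarfile
    from pathlib import Path

    WHAT_TO_BACK_UP = [
        "/etc",                  # system configuration
        "/var/lib/mailman",      # mailing-list configs and archives
        "/var/spool/mail",       # mail spools
        "/home",                 # user data
        "/usr/local",            # locally installed software
    ]
    DEST_DIR = Path("/var/backups/full")   # placeholder destination

    stamp = datetime.date.today().isoformat()
    archive = DEST_DIR / f"system-{stamp}.tar.gz"
    DEST_DIR.mkdir(parents=True, exist_ok=True)

    with tarfile.open(archive, "w:gz") as tar:
        for path in WHAT_TO_BACK_UP:
            if Path(path).exists():
                tar.add(path)

    # The restore half of the plan, spelled out for 3 a.m. use:
    print(f"Wrote {archive}")
    print(f"Restore with:  tar -xzpf {archive} -C /")

The tooling hardly matters; what matters is that the list of what's essential, and the one command that brings it back, exist in writing somewhere off the machine.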
Is your DNS free of single points of failure, and are multiple people able and willing to administer the domain and DNS?
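For what it's worth, checking the DNS half of that question takes about a dozen lines. The sketch below uses the dnspython library and a placeholder domain; it just asks each delegated nameserver for the zone's SOA and reports which ones actually answer:

    #!/usr/bin/env python3
    # Ask every delegated nameserver for the zone's SOA record.
    # Uses the dnspython library (pip install dnspython); 'example.org'
    # stands in for your own domain.
    import dns.message
    import dns.query
    import dns.resolver

    DOMAIN = "example.org"   # placeholder

    for ns_record in dns.resolver.resolve(DOMAIN, "NS"):
        ns_name = str(ns_record.target).rstrip(".")
        ns_addr = dns.resolver.resolve(ns_name, "A")[0].to_text()
        query = dns.message.make_query(DOMAIN, "SOA")
        try:
            reply = dns.query.udp(query, ns_addr, timeout=5.0)
            serial = reply.answer[0][0].serial
            print(f"{ns_name} ({ns_addr}): answers, SOA serial {serial}")
        except Exception as exc:
            print(f"{ns_name} ({ns_addr}): NOT answering ({exc})")

If every listed nameserver turns out to live on the same box, or behind the same DSL line, that's your single point of failure right there.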
Do you have either failover or, where getting back online is important but not urgent, a credible plan to bring replacement hardware / software online? Like:
Life is imperfect, so I know these things usually amount to 'We didn't have time and energy to do more than a half-assed job, so that's what we did', but, seriously, consider what the low-hanging fruit would have been, just an afternoon's work on the failover problem:
- Separate person sets up a *ix machine on static IP in his/her
(separate) premises. Today, this could be on a junk P4 or PIII.
- That person coordinates with the first guy to do daily rsyncs
  of all important data to the failover box (see the sketch after
  this list).
- In the ideal case, the failover box would also be fully configured
  as a hot spare, but this is a fine point. It would probably suffice
  for a few qualified experts to agree that all essential data are
  being captured daily. If the production host fails or the guy in
  charge goes crazy, the parties able to control DNS flip it, and if
  necessary there's a mad scramble to make the failover host fully
  functional. The main thing that cannot be done without is the data.
- Some ongoing monitoring is required, to make sure failover
replication doesn't silently break.
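Here, roughly, is what that afternoon's work might look like from the failover box's side. A sketch only: the hostname, the path list, and the staleness threshold are placeholders, and it assumes rsync plus passwordless ssh (key authentication) already set up between the two machines.

    #!/usr/bin/env python3
    # Daily pull of the production host's important data onto the failover
    # box, plus a staleness check so the replication can't silently rot.
    # Hostname, paths, and threshold are placeholders (a sketch only).
    # Assumes rsync and key-based ssh between the two machines.
    import subprocess
    import sys
    import time
    from pathlib import Path

    PRODUCTION = "production.example.org"      # placeholder hostname
    WHAT_TO_MIRROR = ["/etc/", "/home/", "/var/lib/mailman/", "/var/www/"]
    MIRROR_ROOT = Path("/srv/failover-mirror") # placeholder local path
    STAMP = MIRROR_ROOT / ".last-good-sync"
    MAX_AGE_HOURS = 36                         # alert if older than this

    def sync() -> bool:
        ok = True
        for src in WHAT_TO_MIRROR:
            dest = MIRROR_ROOT / src.strip("/")
            dest.mkdir(parents=True, exist_ok=True)
            rc = subprocess.run(
                ["rsync", "-a", "--delete",
                 f"{PRODUCTION}:{src}", str(dest) + "/"]).returncode
            ok = ok and rc == 0
        if ok:
            STAMP.touch()    # record the last fully successful run
        return ok

    def check_freshness() -> bool:
        if not STAMP.exists():
            return False
        age_hours = (time.time() - STAMP.stat().st_mtime) / 3600
        return age_hours <= MAX_AGE_HOURS

    if __name__ == "__main__":
        if sys.argv[1:] == ["check"]:   # run this one from monitoring
            sys.exit(0 if check_freshness() else 1)
        sys.exit(0 if sync() else 1)

Run it nightly from cron, and have monitoring (or just another cron job) invoke it with the 'check' argument; a nonzero exit means replication has gone stale and somebody should go look.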
For extra credit, once every six months, _deliberately_ flip the failover switch. Nothing is quite as good at proving ability to failover as doing failover.
People look at the rickety old PIII in my garage and think 'Eek, that's fragile.' Sure it is. But highly replaceable, as witness the fact that linuxmafia.com has gone through about eight motherboards in a variety of throwaway, worthless server boxes, and as many sets of hard drives.
It certainly can die. And then I just deploy another (and this time better) one.
If I wanted to, if I weren't too lazy to do the work, it wouldn't be difficult to do hot-sparing. Constructing the same server twice is really no more difficult than doing it once, and then at your leisure you can set up periodic rsync backup scripts to update the spare box's data, to slave the spare's database process off the master's so it's always kept updated, and so on. Resources required: Two throwaway server machines, two static IP addresses, a little thinking and testing, part of a weekend. For extra credit, have a hot-spare offsite somewhere.
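The database part can even be done the lazy way, if true master/slave replication feels like overkill: a nightly dump shipped to the spare. Again a sketch, with placeholder hostname and paths, assuming mysqldump, gzip, and rsync are present and database credentials live in ~/.my.cnf on the master:

    #!/usr/bin/env python3
    # Nightly database refresh for the spare: dump on the master, ship to
    # the spare, and print the one documented command that loads it there.
    # A sketch only: hostname and paths are placeholders, and credentials
    # are assumed to be in ~/.my.cnf.
    import datetime
    import subprocess
    import sys

    SPARE = "spare.example.org"    # placeholder hostname
    DUMP = f"/var/backups/db-{datetime.date.today().isoformat()}.sql.gz"

    # 1. Dump every database, compressed, on the master.
    with open(DUMP, "wb") as out:
        dump = subprocess.Popen(
            ["mysqldump", "--all-databases", "--single-transaction"],
            stdout=subprocess.PIPE)
        gzip = subprocess.Popen(["gzip", "-c"], stdin=dump.stdout, stdout=out)
        dump.stdout.close()
        if gzip.wait() != 0 or dump.wait() != 0:
            sys.exit("dump failed; spare NOT refreshed")

    # 2. Ship it to the spare.
    if subprocess.run(["rsync", "-a", DUMP, f"{SPARE}:/var/backups/"]).returncode:
        sys.exit("transfer to spare failed")

    # 3. Loading it on the spare is one documented command:
    print(f"On {SPARE}:  zcat {DUMP} | mysql")

Proper replication keeps the spare more current, but even this crude version means the spare is never more than a day behind, which for a hobbyist or LUG server is usually plenty.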
For additional extra credit, capture all the instructions for constructing a spare machine from bare metal (or a minimal OS install) in your choice of configuration management software (Chef, Puppet, Ansible, etc.), to make spinning one up even faster and easier.
If this is lunacy, it's the lunacy that gets things done. And the point is: It's not friggin' brain surgery, and that includes the high availability / failover pieces, which give you for free all the _important_ bits of what the big boys charge money for. The trick is to _know what you're doing_ and verify that the important parts are covered.
And knowing what you're doing isn't that hard. I figured it out just by tinkering and paying attention back when I was a _staff accountant_. No, you won't achieve five nines of reliability, but you won't need to; you're not running NASDAQ. You won't be completely immune to DDoSing, but you won't need to; you're not Cloudfront.net.
What you _can_ do without too much difficulty as an aspiring technical Linux user -- because this happens to be what Linux is really good at -- is basic, not too fancy, generic Internet services that Linux has done extremely well for around a quarter century: e-mail, mailing lists, regular ol' Web sites, ssh, a medley of other things. Making the contents be easily portable indefinitely comes along for free, which is definitely not true of more-complex online content and fancier hosted services. And doing sysadminly assurance of failover, backup/restore, etc. is not a lot harder.
Of the two guys who called anyone who makes the effort and walks the walk of open source and autonomous Internet presence a 'lunatic', one is a San Franciscan who relies on a partimus.org Internet virthost that he appears to do nothing to keep operating (that host being not even an autonomous Internet server, but rather just a virtual-hosted domain on Dreamhost, the specialty WordPress hosting provider so terrible at e-mail/mailing lists that BALUG's admins, mostly Michael Paoli and partly me, ran a special project just to get BALUG off their hosting). The other is a Berkeley guy with an ever-changing name who sharecrops for Google, i.e., uses nothing but free-of-charge Google services. Neither of them has, to my knowledge, ever even _aspired_ to learn how to set up and run Internet infrastructure for the Linux Internet community, yet both are sure that someone who does is a 'lunatic', and that what many such people have done pretty easily and for peanuts in monetary outlay is hopelessly impractical.
I see. Right. Puts it in context.