----- Forwarded message from Rick Moen rick@linuxmafia.com -----
Date: Fri, 22 Jun 2018 14:18:56 -0700 From: Rick Moen rick@linuxmafia.com To: conspire@linuxmafia.com Subject: [conspire] (forw) Re: [License-review] Fwd: [Non-DoD Source] Resolution on NOSA 2.0 Organization: If you lived here, you'd be $HOME already.
OSI had been pondering moving licence-approval discussion to the 'issue'-handling software on GitHub. I mildly responded on OSI's license-review mailing list that this might cost them some participants, and why. The following e-mail side-discussion then ensued, which might be of interest here and also bears on topics discussed during this past week's BALUG meeting.
----- Forwarded message from Rick Moen rick@linuxmafia.com -----
Date: Fri, 22 Jun 2018 14:14:34 -0700 From: Rick Moen rick@linuxmafia.com To: [a valued friend on OSI's mailing lists] Subject: Re: [License-review] Fwd: [Non-DoD Source] Resolution on NOSA 2.0 Organization: If you lived here, you'd be $HOME already.
Quoting [my friend]:
On Thu, Jun 21, 2018 at 8:20 PM, Rick Moen rick@linuxmafia.com wrote:
[1] More at http://linuxmafia.com/faq/Essays/meetup.html
I duly read this essay and am moved to protest mildly and off-list.
Without particular objection, and with (as always) interest in what you have to say, you're at some risk of arguing with positions I didn't actually take in that essay. But let's get to particulars:
While it's true that Meetup leaches (or leeches) data from its users and resells it, we make very similar agreements with "for-profit companies in New York" (where I also have the privilege of residing) as a condition of getting paid. The days of "The cash is under the MicroVAX" (which had wheels, as you may recall) are regrettably over. The ship of not-interacting-with-bastards has definitively sailed.
Life is always a series of thorny problems with grey areas, but I can spot some sharp-ish lines among some that are not so.
Let us say that BayLISA had chosen to move its membership-facing Internet operations to a VPS host. Technically, it would not _absolutely_ control its production infrastructure at that point, though it would still have root, but from my perspective as a meeting attendee and Board member, the difference in hosting would not affect me and I might not even notice. BayLISA's contractual and business-operations dealings with the VPS firm would be a back-end deal not adversely affecting me to the best of my ability to tell. (At least, concluding otherwise would require a pretty far-fetched scenario.) So, the disgust I accumulated against BayLISA's management policies from about 2013 onwards would not have applied.
Shifting the hypothetical, suppose instead that BayLISA had opted to move its membership-facing Internet operations to a shared ISP host where it had a few canned services (some virtual domain Web content, Mailman mailing lists administrable only via CPanel, SMTP aliases and webmail ditto, MTA services for the baylisa.org domain totally inaccessible for administration except by ISP NOC people who don't give a rat's ass about BayLISA specifically, and zero access to logs or ability to configure Internet-facing software other than the few things that can be tweaked to a limited degree using CPanel and the Mailman admin webUI).
I have personal experience of this exact arrangement as the listadmin of a local SF convention that made the horrific decision to move to shared hosting at Bluehost, a Utah specialty WordPress-hosting vendor that does an absolutely abysmal job at everything _but_ WordPress, and (among other problems) is on about 80% of the SMTP industry's RBLs because of a well-deserved reputation as a spamhaus (I gather, because they do little or nothing to avoid scummy customers). In this case, I would as a BayLISA member and Board member become moderately disenchanted with BayLISA's hapless inability to consistently run competent Internet operations, with them continually falling back on 'Sorry, but our vendor doesn't support that' as an excuse. Can't tell me why GMail appears to be refusing all mail from the Bluehost MTA IP address? Sorry, we don't have access to Bluehost's logs, and can't tell you what's happening to the mail from our end. And why is it that BayLISA is forced to abandon and delete the entire cumulative archives of Mailman mailing lists if they need to be renamed or otherwise migrated? Sorry, our vendor doesn't give us access to the underlying mboxes and we don't have shell access on the shared server.
But, as bad as that situation would be, especially for a guild of sysadmins to have to act that haplessly all the time, _at least_ members and other users of BayLISA Internet services would not be required to enter into contractual business relationships with Bluehost, consent to one-sided terms of service with a firm they've never heard of, and give away a big bundle of legal rights -- just in order to deal with BayLISA. From the member / outsider perspective, BayLISA's hosting may be lame and incompetent, but at least the outsourced hosting would, like the VPS example, be a back-end deal, not one where the member is expected to be a literal legal party on a take-it-or-leave-it basis.
In this regard, a shared-hosting ISP like Bluehost has a _small_ sampling of the ills I detailed in my essay ('inflexible', 'is some other guy'), but not the rest ('nudzh', 'has the AOL nature', 'nosy', 'expensive'). As noted, the 'is some other guy' nature is merely the normal 'back-end deal' variety where _you_ as a BayLISA member aren't required to sign a contract with Bluehost, and the suckiness of Bluehost's business services is ultimately BayLISA's problem and not directly yours.
So grey areas, but in the shared-hosting case only a light shade of grey. Add the rest of the ills, by making the outsourcing entity be specifically a Web 2.0 network-effect-obsessed, pushy, nosy, manipulative social-networking vendor like Meetup, Inc., and I maintain that it's a really dark shade of grey for the external Internet user -- however convenient it might be to BayLISA as the (actual) customer.
The point of my hypotheticals is that my objection is specifically _not_ to outsourcing of Internet infrastructure to for-profit companies. It's an objection to such outsourcing to a particularly obnoxious subcategory typified by Meetup, Inc., for the reasons cited, and to expecting external people like me to put up with being manipulated and treated like product by such firms, ones _we_ never decided we wished to deal with. My point to us external users is that we should (and I do) stop putting up with that bullshit, and just say no. My point to institutions like my former non-profit corporation, BayLISA, Inc., is that, when you do that, the smarter and more-competent audience you hope to reach is rather likely at some point to get fed up and say 'Bite me'.
But, actually, as I said in the essay, that is not its main thrust. The essay is merely a declaration of policy about what sort of meetup.com-listed events I'm willing to cover on the BALE calendar, which ones I'm not, and why.
(I make no apologies for taking a swing at BayLISA, along the way. They deserve it.)
In addition, whereas you can trust a corporation to be as evil as it says it will be, it is often the case that you cannot trust your fellow human beings not to be *negligently* evil.
So, let's talk for just a moment about contractual relations and the implied covenant of good faith and fair dealing, and similar matters. There's a fine point I continue to be curious about, and am unlikely to get fully answered without the ability to abuse a Lexis account for a long weekend -- but I'll mention it anyway.
I'm curious about how strong, how far-reaching, and how enforceable the obligations of a Web 2.0 social-networking company are to protect the interests of users, given that the users are not paying for service.
On a practical level if not one of ultimate enforceability, I have reasonable but not limitless confidence in the willingness and competence of the ADSL provider, Raw Bandwidth Communications, Inc. (that issues the upstream link and static IPs for my linuxmafia.com server and my residence) to look after my interests as a customer, provided that my interests and the company's do not excessively clash. My money buys some (conditional) loyalty, _and_ I know that established caselaw means that the firm's principal, Mike Durkin, knows that if he acts in provable bad faith, I have strong remedies and not just the ability to complain about him. (Mike has a sterling reputation for customer good faith and also competence, part of the reason I've remained a customer for decades.)
I do know that I'm somewhat at risk if Mike's company _were_ hostile, or if Mike were served with a National Security Letter requiring him to log and hand over all IP information transiting between his DSLAM and my ADSL bridge device. (SSL and SSH mitigate the information leakage potential, but No Such Agency would certainly get traffic analysis data, plus anything sensitive I was stupid enough to put into plaintext traffic.)
What I'm wondering is whether my strong legal enforcement rights arising from my paid-customer relationship with Mike's firm differ much from, say, the rights of a non-paying GMail user against Google, Inc.
Yes, of course they do to the extent they are derived from contractual details, where my contract with Mike is not one-sided, and a GMail user's is laughably so. But beyond that? Taking as given that a GMail user has a contractual relationship where he gives up something qualifying as consideration (his/her privacy, if nothing else), is a judge going to nonetheless side-eye a GMail user's complaints of Google violating the covenant of good faith and fair dealing (and similar protections) more than the judge would me if I sought recourse against Mike's company?
I cannot say this for certain, and cannot point to any relevant caselaw at all, but I rather suspect non-paying customers are likely to get disregarded in court, in any such action, because they simply don't come across as 'real' customers. My intuition tells me that the more one's business relationship is structured as traditional fee-for-service, the more one is likely to be able to successfully assert one's right to equitable treatment.
Negligence among one's technical community is of course a dreadful problem, but it is also a separate one that has its own obvious remedies. The error of trusting everything to just one guy, and then being surprised and disappointed when that guy breaks, is avoidable.
It is actually substantially _more_ and more easily avoidable in the _technical_ community than it is in others. Failure to take those steps in advance is a management/oversight failure, and IMO is the real problem. I speak on this point in my capacity as a senior sysadmin, where one of the differences between a junior SA and a senior one is that the senior tends to know in his/her bones that 90% of professional problems are rooted in people/planning failures.
The Scheme community is missing a big chunk of its history because the mailserver during a critical period has gone offline seemingly permanently, and the person who ran it is ghosting us, probably because he is ashamed to admit there are no backups.
The Scheme community failed to take basic steps to ensure that there was functional, periodically tested failover to independent machine resources, and also tested offsite backups.
Life is imperfect, so I know these things usually amount to 'We didn't have time and energy to do more than a half-assed job, so that's what we did', but, seriously, consider what the low-hanging fruit would have been, just an afternoon's work on the failover problem:
1. Separate person sets up a *ix machine on static IP in his/her (separate) premises. Today, this could be on a junk P4 or PIII.
2. That person coordinates with the first guy to do daily rsyncs of all important data to the failover box.
3. In the ideal case, the failover box would also be fully configured as a hot spare, but this is a fine point. It would probably suffice for a few qualified experts to agree that all essential data are being captured daily. If production host fails or the guy in charge goes crazy, parties able to control DNS flip it, and if necessary there's a mad scramble to make the failover host fully functional. The main thing that cannot be done without is the data.
4. Some ongoing monitoring is required, to make sure failover replication doesn't silently break.
For extra credit, once every six months, _deliberately_ flip the failover switch. Nothing is quite as good at proving ability to failover as doing failover.
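To make that concrete: the daily sync in step 2 is about this much work. A minimal sketch, assuming passwordless ssh from the failover box to production; the hostname and paths below are illustrative, not anyone's actual setup:

    #!/bin/sh
    # pull-failover.sh -- run daily from cron on the failover box.
    # Assumes an ssh key granting read access to the production host.
    # Hostname and paths are illustrative.
    set -e
    rsync -a --delete production.example.org:/var/lib/mailman/ /srv/failover/mailman/
    rsync -a --delete production.example.org:/etc/ /srv/failover/etc/
    date > /srv/failover/last-sync   # timestamp, for the monitoring in step 4

    # crontab entry on the failover box:
    # 30 3 * * * /usr/local/sbin/pull-failover.sh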
When I hear a group of volunteers say 'We entrusted everything important to one guy, never checked in any way, never arranged redundancy or failover, and one day that one guy disappointed us', what I see is a fundamental process failure. Didn't anyone else read Conrad's _Nostromo_?
Since then, we are hosted on Google Groups, Github, and Bitbucket. Is that a risk? It certainly is, but in my judgment a lesser risk.
Google Groups doesn't have most of the ills I listed in my essay. You can even subscribe a non-GMail, non-Googlemail e-mail address to one, although Google, Inc. obnoxiously makes the information about how to do that obscure and the process difficult to use.
As a result, if you look at the membership roster of a Google Group, you typically find that either all or almost all members are using GMail/Googlemail subscription addresses. There are a couple of these operated by LUGs in the S.F. Bay Area where rick@linuxmafia.com (me) is _literally_ the only subscriber not sucking off the teat of Auntie Google for personal mail services.
I'm curious: did your Scheme community lose most of its non-GMail, non-Googlemail users when it moved to Google Groups?
But, aside from that common membership skew in Google Groups, and the passive-aggressive hostility to non-Google mail systems built into the infrastructure, it's not a horrible choice for those not willing to just put up a spare machine on a static IP and run Exim/Postfix and Mailman. (_Plus failover_, if you have any common sense.) And on a quick glance, exactly zero of my essay's list of Meetup's ills apply to it.
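For the record, 'put up a spare machine on a static IP and run Exim/Postfix and Mailman' is roughly an afternoon, too. A rough sketch for a Debian-ish box running Mailman 2; the list name is made up, and the details vary with your distribution and MTA choice:

    # Debian-ish sketch; 'mylug' is a hypothetical list name.
    apt-get install exim4 mailman        # MTA plus GNU Mailman 2
    dpkg-reconfigure exim4-config        # answer 'internet site' for your domain
    newlist mylug                        # creates the list, prints alias lines
    # paste the printed aliases into /etc/aliases, then:
    newaliases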
I wish to also make a point in passing about the _ease_ of arranging backup and failover of relatively simple, commodity Internet services like SMTP & mailing lists based on the GNU Mailman MLM: Here in the Bay Area, I for many years hosted (and still host) on linuxmafia.com a Mailman list for a San Francisco LUG called SF-LUG. During those many years, I repeatedly pleaded with the leadership of that group to work with me to set up periodic, automated backups of the membership roster and cumulative mbox -- with no result except avoidance and excuses. A few years ago, I had a couple of months of downtime because of an improbable motherboard failure right in the middle of a major system upgrade. This happened immediately following my losing my job, so I was a bit demoralised, and was then barraged by rather clueless complaints from SF-LUG insiders, e.g., suddenly demanding copies of my backup data but being unwilling to travel 30 miles to my house to get them. (Apparently, I was supposed to drive to _them_, or something.) The more the SF-LUG people barraged me with complaints and totally useless suggestions, the more I put off rebuilding my server, which actually wasn't that much work. I figure the annoying communication delayed my bothering to fix the situation by about two months longer than it otherwise would have taken, as suddenly I was rather enjoying the vacation from running an Internet server giving free-of-charge services to ingrates.
Tellingly, the reason I offered hosting to SF-LUG's mailing list in the first place is that the main guy _had_ created a Mailman list, ran it for a brief while, and had some sort of hardware failure that caused the total loss of everything. Nobody had even so much as bothered to occasionally copy the cumulative mbox off to a USB flash drive, which would have been dead-simple and was totally flippin' obvious. The guy was devastated and demoralised by the experience -- but, as became apparent later, learned nothing. I wished the group well and so gladly offered it a mailing list home to replace the lost one -- and immediately got ignored, for years and years, as I repeatedly implored them not to make _again_ the mistake that had destroyed them the first time. Which rather tells you something, nei?
When I _did_ get my linuxmafia.com server back online after about four months, a technical person _not_ among the somewhat useless people heretofore running SF-LUG stepped forward and worked out _really simple_ means of getting the membership rosters periodically sent offsite. (He also gave key help in recovering my server from its failed software upgrade, which helped motivate me to bother.) For the rosters, it's a simple weekly cron job that e-mails the rosters to a list of people. For the mbox, it's (IIRC) an rsync cron job. Very simple, highly reliable, _and_ provably sufficient to rehost the SF-LUG mailing list if a meteor lands on my server.
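For the curious, the whole arrangement amounts to something like the following. This is a simplified sketch from memory, in /etc/cron.d format (user field included); list name, recipient, and destination are placeholders, and the paths are the usual ones for a Debian-packaged Mailman 2:

    # Weekly roster dump, mailed offsite:
    0 6 * * 1   list    /usr/lib/mailman/bin/list_members sf-lug | mail -s 'weekly sf-lug roster' keeper@example.org

    # Weekly pull of the cumulative mbox, run from the offsite box
    # (assumes ssh access to the list host):
    15 6 * * 1  backup  rsync -a linuxmafia.com:/var/lib/mailman/archives/private/sf-lug.mbox/ /srv/backup/sf-lug.mbox/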
Low-hanging fruit, you will note. Not clever, not even totally comprehensive (e.g., subscriber-specific state data are not backed up). But sufficient. As the late Adam Osborne of Osborne Computer used to say, 'Adequacy is sufficient.'
As to GitHub and Bitbucket, my vague recollection is that public access to the hosted repos is possible without a signup and login. The latter entail, as mentioned in my essay, a new business relationship with a company you never planned specifically to deal with at all, agreeing to a one-sided service agreement, and being subjected to a bunch of rules dictated to you by a bunch of strangers (who, again, you never heard of otherwise or particularly wanted to deal with). If I were a member of the Scheme community, and wanted, say, to share code, I'd have a problem with that, and might very well say 'no, thanks'. I might then treat the organised Scheme community as damage and route around it (RIP, Mr. Barlow), e.g., share my code with other individuals via direct exchange between their git repos and mine, and they would of course be free to share that further with GitHub and Bitbucket if they're willing to put up with corporate B&D, but that'd be not my problem.
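That direct exchange, by the way, is no exotic feat: distributed operation is what git was designed for. A minimal sketch, with hypothetical names and paths:

    # Pull a collaborator's work straight from their machine over ssh,
    # with no hosting intermediary (host, user, and path are hypothetical).
    git remote add alice ssh://alice@alice.example.org/home/alice/src/scheme-lib.git
    git fetch alice
    git merge alice/master      # or review and cherry-pick, as preferred

    # They can pull from you the same way, or you can export a repo
    # read-only with 'git daemon' or plain HTTP, if you prefer.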
I have an analogy from my own experience in the community: I'm a longtime HOWTO maintainer with the Linux Documentation Project. About five years ago, a new guy at LDP said that he'd decided LDP wasn't going to process HOWTO updates via incoming SMTP submissions of SGML any more, and we should all please just check them into GitHub.
I said, I don't have a GitHub account and have no intention of entering a business relationship with GitHub, Inc. to have one. If LDP is interested in my further HOWTO updates, I'm open to any other reasonable arrangement, and perhaps someone on LDP staff who _does_ have a GitHub account will be kind enough to check in my updates when I send them, as always, to submit@en.tldp.org .
I've continued to do just that, and the mail gets delivered, but LDP has not in the last five years bothered to check my SGML into its repo. So, basically _now_ the more-recent versions of my HOWTOs and the SGML source for them are available on linuxmafia.com and from anywhere that chooses to mirror them (licensing encourages that), but LDP is ignoring them. I figure it's their loss, and the community's, but that their imposing of a sudden demand for external business relations was not reasonable, and that the resulting damage to LDP from their poor decisions was regrettable but ultimately self-inflicted.
----- End forwarded message -----
_______________________________________________ conspire mailing list conspire@linuxmafia.com http://linuxmafia.com/mailman/listinfo/conspire
----- End forwarded message -----
Just because, I'll sharpen some of those points that are relevant to Tuesday's BALUG discussion.
Speaker Liz Krumbach, after her excellent talk about DC/OS, gave generously of her time for a Q&A session with attendees sitting around the table with her at Henry's Hunan.
In passing, Liz mentioned (and I'll paraphrase, here, and apologise in advance if I misstate anything) that, although her heart is still in the notion of independent Internet hosting by individuals and community institutions, the reality is that pretty much everything's moving to some form of hosted cloud storage, and that even she and her husband now freely put some fairly sensitive personal data in Google Docs these days, even though they know all about the drawbacks. (Fair enough. That's her perception, and her and her husband's choice, and I know Liz well enough to understand her actions to be thoughtful.)
She also pointed out, again perfectly reasonably, that increasingly most people don't even host their own files in practically any category, streaming their video content, streaming their music, and so on. Even though they _could_ be more independent.
Two BALUG attendees, whom I could name (but will be nice), _then_ chimed in, going rather far beyond what Liz said (while acting as if they were agreeing), saying that anyone who attempts to run a personal Internet server (and they said they knew there were some sitting around the table) is a certifiable lunatic, and that doing so is utterly impractical.
(Hi, I'm Rick Moen, listadmin of most of the Bay Area's Linux User Group mailing lists and owner/operator of linuxmafia.com / unixmercenary.net, home of two LUGs, a major community calendar, etc. on old castoff machines I run literally on fixed IP on home ADSL in my garage. I've done this sort of thing since the 1980s, when as a staff accountant with no IT background, I started running my own Internet servers because nobody told me I couldn't. I just threw things together, tinkered a little, learned on the fly, and it worked. _Later_, I became a system administrator as my profession.)
The Scheme community failed to take basic steps to ensure that there was functional, periodically tested failover to independent machine resources, and also tested offsite backups.
The core of this is: Do you have a creditable plan of action for each of the things that might plausibly be expected to go wrong? Planning for Internet infrastructure, no different from planning for _any_ organised activity, requires contingency planning. Running a department, or pretty much anything else? Then, make sure people know what to do or not do if there's a fire. Or a power outage. Or a flooded floor in an office building because the sprinklers went off or the pipes broke upstairs.
In the case of computers and network infrastructure, it's the same principle, and thus not really different.
Hardware: Moving-rust mass storage, fans, cables, and input devices wear out most often, from mechanical stress. SSDs wear out too, from a different type of stress. Do you have a plan for when, not if, that happens?
Software: Upgrades can go tragically bottom-side up if coincidental to a hardware failure. (Ask me how I know.) A tired sysadmin, or any junior SA armed with root access, is even more dangerous to systems than a programmer bearing a screwdriver. Mishaps happen. Do you have a documented plan for _what_ to back up and exactly how to restore, and is it kept somewhere safe? (I do, and archive.org and cache.google.com back it up for me: http://linuxmafia.com/faq/Admin/linuxmafia.com-backup.html )
Is your DNS free of single points of failure, and are multiple people able and willing to administer the domain and DNS?
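Checking takes about a minute. A quick sketch, with a placeholder domain:

    # Are there at least two nameservers, and do they sit on different
    # networks / different machines?  (example.org is a placeholder.)
    dig +short NS example.org
    dig +short A ns1.example.org
    dig +short A ns2.example.org
    # If every NS name resolves into the same netblock or onto the same box,
    # that's a single point of failure, however many names are listed.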
Do you have either failover or a credible plan to bring replacement hardware / software online if it's important but not urgent? Like:
Life is imperfect, so I know these things usually amount to 'We didn't have time and energy to do more than a half-assed job, so that's what we did', but, seriously, consider what the low-hanging fruit would have been, just an afternoon's work on the failover problem:
- Separate person sets up a *ix machine on static IP in his/her
(separate) premises. Today, this could be on a junk P4 or PIII.
- That person coordinates with the first guy to do daily rsyncs
of all important data to the failover box.
- In the ideal case, the failover box would also be fully configured
as a hot spare, but this is a fine point. It would probably suffice for a few qualified experts to agree that all essential data are being captured daily. If production host fails or the guy in charge goes crazy, parties able to control DNS flip it, and if necessary there's a mad scramble to make the failover host fully functional. The main thing that cannot be done without is the data.
- Some ongoing monitoring is required, to make sure failover
replication doesn't silently break.
For extra credit, once every six months, _deliberately_ flip the failover switch. Nothing is quite as good at proving ability to failover as doing failover.
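And the monitoring item in that list can be as dumb as a cron-driven script that complains when the replica goes stale. A minimal sketch, reusing the illustrative timestamp file from the earlier rsync sketch; the address is a placeholder:

    #!/bin/sh
    # check-failover-freshness.sh -- run daily from cron on the failover box.
    # Sends mail if the replicated data hasn't been refreshed in two days.
    STAMP=/srv/failover/last-sync
    if [ -z "$(find "$STAMP" -mtime -2 2>/dev/null)" ]; then
        echo "Failover replication looks stale or broken: $STAMP" \
            | mail -s 'failover sync stale' admin@example.org
    fi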
People look at the rickety old PIII in my garage and think 'Eek, that's fragile.' Sure it is. But highly replaceable, as witness the fact that linuxmafia.com has gone through about eight motherboards in a variety of throwaway, worthless server boxes, and as many sets of hard drives.
It certainly can die. And then I just deploy another (and this time better) one.
If I wanted to, if I weren't too lazy to do the work, it wouldn't be difficult to do hot-sparing. Constructing the same server twice is really no more difficult than doing it once, and then at your leisure you can set up periodic rsync backup scripts to update the spare box's data, to slave the spare's database process off the master's so it's always kept updated, and so on. Resources required: Two throwaway server machines, two static IP addresses, a little thinking and testing, part of a weekend. For extra credit, have a hot-spare offsite somewhere.
For additional extra credit, encode all the instructions for constructing a spare machine from bare metal (or a minimal OS install) in your choice of configuration-management software (Chef, Puppet, Ansible, etc.), to make spinning one up even faster and easier.
If this is lunacy, it's the lunacy that gets things done. And the point is: It's not friggin' brain surgery, including the high availability / failover pieces that give you for free all the _important_ bits of what the big boys charge money for. The trick is to _know what you're doing_ and verify that the important parts are covered.
And knowing what you're doing isn't that hard. I figured it out just by tinkering and paying attention back when I was a _staff accountant_. No, you won't achieve five nines of reliability, but you won't need to; you're not running NASDAQ. You won't be completely immune to DDoSing, but you won't need to; you're not Cloudfront.net.
What you _can_ do without too much difficulty as an aspiring technical Linux user -- because this happens to be what Linux is really good at -- is basic, not too fancy, generic Internet services that Linux has done extremely well for around a quarter century: e-mail, mailing lists, regular ol' Web sites, ssh, a medley of other things. Making the contents be easily portable indefinitely comes along for free, which is definitely not true of more-complex online content and fancier hosted services. And doing sysadminly assurance of failover, backup/restore, etc. is not a lot harder.
Of the two guys who called anyone who makes the effort and walks the walk of open source and autonomous Internet presence a 'lunatic', one is a San Franciscan who relies on a partimus.org Internet virthost that he appears to do nothing to keep operating (that host being not even an autonomous Internet server, but rather just a virtual-hosted domain on Dreamhost, the specialty WordPress hosting provider so terrible at e-mail/mailing lists that BALUG's admins, mostly Michael Paoli and partly me, ran a special project just to get BALUG off their hosting). The other is a Berkeley guy with an ever-changing name who sharecrops for Google, i.e., uses nothing but free-of-charge Google services. Neither of them has to my knowledge ever even _aspired_ to learn how to set up and run Internet infrastructure for the Linux Internet community, yet both are sure that someone who does is a 'lunatic', and that what many such people have done pretty easily and for peanuts in monetary outlay is hopelessly impractical.
I see. Right. Puts it in context.
Quoting Rick Moen (rick@linuxmafia.com):
Of the two guys who called anyone who makes the effort and walks the walk
of open source and autonomous Internet presence a 'lunatic', one is a San Franciscan who relies on a partimus.org Internet virthost that he appears to do nothing to keep operating (that host being not even an autonomous Internet server, but rather just a virtual-hosted domain on Dreamhost, the specialty WordPress hosting provider so terrible at e-mail/mailing lists that BALUG's admins, mostly Michael Paoli and partly me, ran a special project just to get BALUG off their hosting).
After enduring all kinds of s**t dealing with DreamHost in the past (which I've detailed here & won't get into again), all I can say is anyone or any ISP stupid enough to continue to use DreamHost deserves whatever happens to them, whether it be server down time, horrible e-mail/mailing list hosting or just plain old inept tech support (when an issue does occur).
-th
On Fri, Jun 22, 2018 at 10:36 PM, Rick Moen rick@linuxmafia.com wrote:
Quoting Rick Moen (rick@linuxmafia.com):
Speaker Liz Krumbach, after her excellent talk about DC/OS, gave generously of her time for a Q&A session with attendees sitting around the table with her at Henry's Hunan.
Liz _Joseph_. I'm sorry, Liz.
BALUG-Talk mailing list BALUG-Talk@lists.balug.org https://lists.balug.org/cgi-bin/mailman/listinfo/balug-talk