[BALUG-Talk] hardware operations, etc. Re: operation appears to have been a success: hardware issues
Michael Paoli
Michael.Paoli@cal.berkeley.edu
Mon Oct 30 20:18:31 PDT 2017
> From: "Rick Moen" <rick@linuxmafia.com>
> To: balug-talk@lists.balug.org
> Subject: Re: [BALUG-Talk] operation appears to have been a success:
> hardware issues
> Date: Sun, 29 Oct 2017 10:16:21 -0700
> Quoting Michael Paoli (Michael.Paoli@cal.berkeley.edu):
>> Ah, and appears the operation has been quite successful ...
> Good job!
Thanks. :-)
>> Also completed successful live migration of the balug VM back from
>> physical host "vicki" ... which nominally resides under the brain
>> of the patient, but was live-migrated away to "vicki" while the
>> patient's brain was in suspended animation for a while during
>> the patient's operation.
> See, this is one of the reasons I am trying to move production systems
> into VMs: It means you can snapshot systems and migrate them between
> hardware-disparate physical hosts as necessary.
Yup, helluva lot of advantages to VMs. Not *the* answer to
*everything*, but in many cases, the best answer/solution, or among the
best.
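(For reference - and presuming a KVM/libvirt stack underneath, which is
an assumption on my part - a live migration like that boils down to
virsh migrate --live from the shell, or about this much via the
python-libvirt bindings; host and domain names below are just
placeholders:)

import libvirt

# Connect to the source and destination hypervisors (URIs/hostnames
# here are placeholders - substitute the real ones).
src = libvirt.open("qemu+ssh://source-host/system")
dst = libvirt.open("qemu+ssh://dest-host/system")

dom = src.lookupByName("balug-vm")  # hypothetical domain name

# VIR_MIGRATE_LIVE keeps the guest running while its RAM is copied
# over; the guest only pauses briefly for the final switch-over.
# (Assumes the disk images live on storage both hosts can reach.)
new_dom = dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
print("now running on", dst.getHostname(), "as", new_dom.name())
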
> Starting years ago, I repeatedly tried to have a discussion on the
> SVLUG mailing list about hardware selection and system design for
> _home servers_ that took advantage of recent advances, and it was a bust
> because all participants did was advocate I buy whatever they'd been
> buying, while ignoring both the home-server role's requirements and the
> recent advances.
Well, ... folks will tend to have their biases/preferences, and, human
nature 'n all that, they'll want to feel good about what they bought -
especially if they put much resource into it ($$ and/or otherwise) -
so ... confirmation bias, etc. ... they'll often tend to advocate for
what they chose/bought/did ... even when quite sub-optimal. :-/ Though,
too, some will want to go into the "horror stories" of what they
did/tried/got.
> I suggested a server inside my home should ideally have no moving parts
> (no fans, SSDs instead of hard drives) so as to not be noisy, and be
> low-power and low-heat: They suggested units with lots of fans and hard
> drives. I suggested it needed to do RAID1 on real, i.e., better than
> USB, mass-storage interfaces: They suggested Raspberry Pis. I
> suggested it needed to be x86_64 to avoid requiring out-of-tree kernels
> (security risk if there's a kernel exploit requiring an updated kernel
> _now_) and special-snowflake bootloaders: They suggested ARM.
Well, ... "it depends" ... a bit, anyway. But yes, what you
suggest/advocate is a darn good solution - among the best, if not *the*
best - for most typical home installations. Moving parts mean noise -
bad for home, especially "living quarters" thereof - and are also the
most common failure points. I recall, some years back, supporting a
fairly large number of installed PCs in retail environments - about 50%
of the failures were power supplies - typically because the fan failed
or slowed too much, then the heat killed the power supply. Next in
line, at probably 30% or somewhat more, were failed hard drives.
The other hardware failures were a bit miscellaneous, but moving parts
were, or caused, about 80% of the hardware failures. That's still
relatively true today. E.g. I still deal with lots of computers in data
centers, and what most commonly fails? The moving parts - hard
drives and fans ... though as hard drives are increasingly being
replaced by SSD, hard drive failures are becoming less common as their
count goes down. The other annoying failures I see too commonly on at
least some enterprise equipment - batteries. E.g. I've seen many series
of systems where hard drives are hot swappable and there's a RAID
controller with battery-backed cache, and, over time, the batteries
fail at higher rates than the hard drives(!) - yet the batteries are
*not* hot swappable
... ugh. But ... at least the battery failures "only" cause a
performance hit 'till they're replaced ... but for some
systems/workloads, that's a big deal - even critical.
And yes, good to keep the power requirements reasonably down. Most home
systems don't need to be sucking that much power most of the time. Most
home users ought not be seriously mining Bitcoin at home. 8-O If they
really want to do that, it's gonna burn a lot of power, and make a lot
of noise - and also have significantly different hardware requirements
and optimizations. And, for home, if it really is a server, mostly just
used via network and such, and if there's a suitable place to tuck it
away where noise isn't really an issue, then sometimes a system with
fan(s), hard drives, etc. may be fine - for that particular
installation. But even then, it still has the moving parts failure
downsides, etc.
ARM? Well, "it depends". ;-) But yeah, for the most part, the further
one gets from x86 (and these days x86_64/amd64) on Linux kernels, the
more one moves towards the fringes of support ... which is not the best
place to be. So for now, and probably for a long time to come, the best
support
(and quickest fixes for issues), with Linux, will almost always be
x86_64/amd64, with i486 (very) close behind, and everything else,
trailing along sometime after ... and at varying speeds/levels -
depending how far out on the fringe.
Hard drive vs. SSD? I don't think SSD has fully taken over for hard
drives ... quite yet. But the days(/years?) remaining for hard drives
are numbered. In terms of cost per unit storage, it wasn't all that
many years ago that SSD was about 10x as expensive as hard drive. Now
it's closer to 5x ... expect that trend to continue. Once SSD gets to
right
around as inexpensive as spinning rust, the hard drive will go the way
of the floppy and other obsoleted technologies.
> And, the one that really disappointed me: I suggested it'd be cool to
> run at least a couple of VMs, production host alongside beta, and that
> for a production full-service Internet host I really cannot do it in
> less than 512 MB RAM because of SMTP antispam: My
> friend Joey Hess, one of the key people in the Debian Project at the
> time, responded saying he uses a BeagleBone Black, a nice small
> ARM-based server -- that maxes out at 512 MB RAM total.
Yeah, the "small" systems, such as Pi - good for *some* things, but
quite limited. A lot of things just don't work well - or at all - with
such constricted RAM. Newer Pis do have more RAM ... but it's still not
all that much.
> I pointed out that this just didn't support the VM-based architecture
> I thought was the future way forward (unless there were some way in the
> 2010s to do Web serving, SMTP, mailing lists, DNS nameservice, ssh, and
> the usual server tasks in radically less RAM), and all people could say
> was 'Shift tactics and divide processes up among a farm of tiny
> computers.' Eh, no, I don't think a data centre of Raspberry Pis is a
> reasonable alternative. So, I've had to mostly drive iterations of that
> discussion myself, first on the SVLUG list and more recently on CABAL's.
Yes, VMs are lovely. :-) Not a panacea, but the "right"
answer/solution to a whole lot 'o problems/challenges.
> Anyway, this is why I think small, silent or quiet, x86_64-based
> machines able to run at least 8GB RAM are cool and the proper way
> forward: primarily because hypervisors solve a bunch of problems
> including live-migration, and decoupling what you run from the specific
> hardware chipsets you run it on.
Yup, ... *the* best answer a whole lot 'o the time.
> But this weekend's motherboard purchase is actually for a family car, FWIW,
> because the current one went Pfft!
> https://store.allcomputerresources.com/20chptcrpcme10.html
>
> Ca-ching!
Egad. Not all that long ago, I sold a "perfectly good" running car (no
major mechanical issues, basically just some minor issues, ran fine,
good shape, etc.) ... for ... yeah, sold it for less than twice the cost
of that car computer. I know some folks who quite advocate, for modern
cars, buying a spare computer/controller - at least for some/many
vehicles. Notably, for many, when that thing dies, the car is
basically dead in the water, and one is quite stuck if one doesn't have
a spare.
> Man, my dad, who before he became a Pan American World Airways captain
> sold and worked on 'computers' when they were made by Marchant
> (https://en.wikipedia.org/wiki/Marchant_calculator) in Oakland would
> hardly recognise the current world, where you end up having to reboot
> your oven, and where your automatic transmission misbehaves because the
> car's motherboard has burned itself out.
Oye, ... technology sure does march along! First computer my dad ever
worked on ... magnetic drum storage. First computer I ever programmed,
mass storage for backing up and loading programs - cassette tape. 8-O
My first x86 computer ... floppies (360/1200 & 1440 KiB) and a hard
drive (~150M). MicroSD (microSDXC) is up to ~300G ... so that's about
a million
times the storage of 5.25" DD floppy, shrunk down to the size of about
my pinky fingernail. Not to mention the increase in speed, reliability,
and no moving parts. And drives? Yesterday I bought a 2T 2.5" SSD.
:-) More than 10,000 times the capacity of my first hard drive, and
much smaller in size and power, and much greater in speed.
So ... some other miscellaneous bits ... quite pleased with my Dell find
from earlier this year. Just took a peek on eBay - looks like the fair
market value of that used hardware is probably roughly around $300, +/-
some fair bit depending on configuration, particular model, condition,
etc. If a laptop still holds that much value at >~=3 years old, that's
sayin' somethin' right there. Anyway, so far, quite favorably
impressed. And darn nice to have integrated 3 buttons for the pointer
device ... no more of that middle button emulation with the two button
press ... ugh. And stick-point (seems every manufacturer calls it
something different ... trademarks?) pointer thingy - good to also have
that - always rather missed it when without, as I'd used laptops with
that type of pointer device before trackpads started showing up on
laptops. Oh, ... and quiet? Pretty darn quiet, ... sometimes -
actually much of the time, dead silent quiet. Yes, it has fans. But
... with SSD (thus far just the ~150G one from older laptop), and pretty
good thermal design (lots of heat piping and such - it's designed to be
able to dissipate quite a bit of power ... power supply is bit over
200W! ... brick of a thing) ... anyway, with my more typical loads
*much* lighter than that - much of the time the thing isn't even running
either of the fans. Contrast that with the laptop I had been using, and
the fan was always going, 100% of the time, anytime the laptop was
powered up. I think, also on the Dell, the larger size and metal casing
all help with heat distribution and dissipation.
And ... to play devil's advocate ... well, just a wee bit ;-)
Raspberry Pi? Well, ... "it depends". *Sometimes* the Pi is a best
solution/fit. I'll give a couple scenarios. I've a project in mind -
want it to be small, isolated, independent, and as secure as feasible,
at least for a basic home-based project. A Pi is a good fit for it, as
it's
economical for that, low power, and highly independent. What I have in
mind for that is actually a pair of Pis. One "front end", connected to
The Internet, reasonably hardened as feasible, but that would mostly be
a fairly simple web/API that would talk to a secured back-end. And ...
the secured back-end? Another Pi. And the communication between them?
Serial only - simple interface and protocol - no PPP or anything like
that, just a simple, quite restricted and hardened login/interact API -
so that'd be the secured back-end. Now, ... could that be done with
virtual machines? Yes, ... but ... as secure and isolated as that? No.
Virtual machine technology on Linux, mostly all fine and wonderful, but,
also kind'a the relatively shiny new kid on the block. There have been
security issues, ... may likely still be some - at least on occasion -
for some while to come. It's just not as secure and hardened and with
the long track record, of, e.g. much older virtualization technology on
mainframes. So ... with more maturity and vigilance, VMs on Linux stand
an excellent chance of establishing a very good, solid, long secure
track record ... but ... not quite there ... yet.
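(To make that serial back-end notion a wee bit more concrete - a rough
sketch, assuming pyserial and a made-up two-command protocol; the real
device path, speed, and command set would of course be whatever the
project actually needs:)

import serial  # pyserial

def lookup(key):
    # placeholder for whatever the secured back-end actually serves up
    return b"value-for-" + key

# whitelist of allowed commands - anything else gets rejected
ALLOWED = {
    b"PING": lambda arg: b"PONG",
    b"GET": lookup,
}

def serve(port="/dev/serial0", baud=115200):
    # one request line in, one reply line out - nothing more
    with serial.Serial(port, baud, timeout=5) as link:
        while True:
            line = link.readline().strip()
            if not line:
                continue
            cmd, _, arg = line.partition(b" ")
            handler = ALLOWED.get(cmd)
            reply = handler(arg) if handler else b"ERR unknown command"
            link.write(reply + b"\n")

if __name__ == "__main__":
    serve()
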
Another Pi "solution" scenario. Colo and "bring your own box". With
customer-provided equipment in colo, costs are in large part about
space and power. A Pi can be a way to very substantially cut down on
that
space and power, and some colos support such, and at significantly
reduced pricing. And a Pi is "more than enough" for *some* workloads.
Certainly lacks some features/capabilities, but sometimes Pi is the
answer for certain cost/benefit tradeoff analysis results.
However, in many many cases, Pi (or similar) is *not* the answer. E.g.
with VMs, the virtual sizing can be pretty arbitrary - from very small,
to quite beefy. Can't do that with Pi. Some VM technologies also do
RAM deduplication. Got 200 VMs running on a large physical host? 83 of
'em running the exact same kernel? Don't really need 83 copies of that
same code in RAM for each VM, do we? So, some VMs can do that RAM
deduplication, thus further increasing capacity, efficiency, etc. Can't
do that with Pi. Don't think it's 100% there yet, with Linux, but some
other operating systems well support dynamically adding RAM and CPU to a
host, and likewise removing them - all while the host is running. I
think as such technology comes to and matures in Linux, that will also
help, and well play into VM technologies. At present with Linux VMs,
most of the dynamic scaling, typically involves adding and dropping VMs
as loads and such change ... and sometimes, though not so dynamic,
spinning up larger VMs, and later dropping back to smaller VMs. That
works fairly well for some types of workloads ... but there are others
that just want a bigger host, or small number of large hosts - even if
they're VMs. But ... off-peak, they may not need nearly so much
resource. So, ... OS images that can dynamically grow and shrink ... I
think in the future that will also significantly benefit Linux ... but
it's not fully there yet.
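(For the curious: on Linux/KVM, that RAM deduplication is KSM - Kernel
Samepage Merging - and it's controlled and reported via sysfs. A tiny
read-only sketch, just poking the standard counters to see whether it's
on and roughly how much guest memory it's collapsing:)

from pathlib import Path

KSM = Path("/sys/kernel/mm/ksm")

def ksm_stats():
    # standard KSM counters exposed by the kernel
    names = ("run", "pages_shared", "pages_sharing", "pages_unshared")
    return {n: int((KSM / n).read_text()) for n in names}

if __name__ == "__main__":
    s = ksm_stats()
    if s["run"] != 1:
        print("KSM is not currently running")
    elif s["pages_shared"]:
        # pages_sharing / pages_shared ~= how many duplicate pages are
        # being folded into each shared page, on average
        print("sharing ratio ~ %.1f" % (s["pages_sharing"] / s["pages_shared"]))
    print(s)
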
And ... ye olde laptop with the video/GPU problems? I was thinkin' I
might mostly turn it into a sort'a "headless server". As I'd left the
BIOS settings, it does have wake-on-LAN enabled ... but that doesn't
cause it to *boot* off the LAN - just wakes it up. Network 'n all that
still
fine. And ... has more RAM (8 GiB) compared to "vicki" (only 2 GiB),
and also much quieter than "vicki" - the 1U machine with the small high
RPM fans. Anyway, *may* be feasible ... but the video is highly
challenging ... so ... sometimes I can boot something and see fine
what's going on 'n all ... other times I can't even see the hardware's
boot selection menu. :-/ So ... shall see where that goes. But, if I
can coerce it to reasonably do so, general idea is to have it be an
"on demand" headless server. It's not really good for all that much
beyond that - at least in terms of reliability, ... though I suppose,
too, it still has a perfectly good optical burner in it - so could also
use it for that.
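(Wake-on-LAN, for what it's worth, is just a UDP broadcast of the
"magic packet" - 6 bytes of 0xff followed by the NIC's MAC address
repeated 16 times. A minimal sender sketch; the MAC below is made up:)

import socket

def wake(mac="01:23:45:67:89:ab", broadcast="255.255.255.255", port=9):
    # build the magic packet: ff ff ff ff ff ff + MAC x 16
    payload = bytes.fromhex("ff" * 6 + mac.replace(":", "") * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, port))

if __name__ == "__main__":
    wake()
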
Anyway, soon on to installing my nice new 2TB SSD in the (relatively)
"new" (to me) laptop. I've been thinking on what to do with the various
drives, and have generally decided, I want the two SSDs in the "new"(er)
laptop, larger one as primary/"H"DD1, smaller (~150G) as secondary/HDD2.
I'll do mdraid (RAID-1) of at least the most/more critical stuff between
the two - with software RAID, I can conveniently pick and choose what is
mirrored and/or not - most hardware RAID is usually not nearly as
flexible (though hardware RAID does also offer *some* advantages -
notably if it's rock solid hardware RAID (flakey hardware RAIDs don't
count!), and one does matched drives in RAID-1 (or sometimes also
RAID-5, but generally RAID-1 for at least core OS stuff), they can be
basically dead simple and highly reliable - particularly also with
proper
monitoring to detect any disk issues. And replacing failed drives in
such can also be dead simple - confirm failed drive, pop it out, put in
replacement, confirm that went in fine, confirm it remirrors all fine
again - that's basically it). Anyway, software RAID is much more
flexible - a key advantage in many scenarios. But it's a bit more to
configure, and a wee bit more work to do recoveries on - certainly not
complex, but a bit more than dead-simple hardware RAID where no actual
software
commands need be run at all (but alas, one should run some to at least
make suitable confirmations). Anyway, mirror the most/more important
stuff, and ... "all that extra space" - particularly on the larger SSD -
that's "extra space" for ... "whatever". Thus far I've got in mind,
e.g. about 100G of ISO images (whole collection), a convenient place to
stage backups before copying to other media ... some things like that.
Probably some less important VM "throw-away" (or nearly so) images.
Anyway, haven't fully planned it out, but the idea is to get the new
SSD into the "new" laptop as soon as feasible, in the least disruptive
way (have
general game plan - still refining that) - then I can mostly take my
time to work out the details of placement, adding mirroring, doing at
least some reasonable testing of storage before adding chunks of it to
active use, etc. And the old ~700G spinning rust that came in the "new"
laptop? Moved that drive to the "old" laptop - gives it much more
storage than it used to have ... but at a performance hit ... but
performance definitely not at all critical for anything I might manage
to still get that old laptop to reasonably do. More later, but those
are the next steps on my hardware (mini-)adventure. Oh, also not a top
priority, but one thing the "new" laptop may also well be able to do -
potentially serve as a "portable install server". I did also, a hair
over a week ago, pick up an 8-port Gig switch.
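(And since proper monitoring is what keeps any of that RAID honest -
mdadm --monitor being the usual tool for it - a bare-bones /proc/mdstat
sanity check, just as an illustrative sketch:)

import re
import sys

def mdstat_degraded(path="/proc/mdstat"):
    # healthy member status reads like "[UU]"; an underscore, as in
    # "[U_]", marks a failed or missing member of that array
    text = open(path).read()
    return re.findall(r"\[[U_]*_[U_]*\]", text)

if __name__ == "__main__":
    bad = mdstat_degraded()
    if bad:
        print("DEGRADED md array status:", " ".join(bad))
        sys.exit(1)
    print("all md arrays look healthy")
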