Ah, and it appears the operation has been quite successful ... the "patient" got a body transplant, and is doing much better now. Essentially, the brain of the patient (~150 GB SSD) was successfully transplanted out of the ailing body (Zareason Strata 6880) into an apparently good, healthy donor body (Dell P10E001 Precision M6600). Also completed successful live migration of the balug VM back from physical host "vicki" ... which nominally resides under the brain of the patient, but was live-migrated away to "vicki" while the patient's brain was in suspended animation for a while during the patient's operation. More later, but thus far all looking quite well. :-) Also, before the operation, the donor body was well prepped, with a BIOS upgrade - notably covering some critical Intel vulnerabilities - and had been reasonably tested to apparently be a suitably healthy donor. Some slight adjustments to /etc/udev/rules.d/70-persistent-net.rules were made to squash any transplant rejection symptoms, and all is looking quite well.
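For illustration, the sort of adjustment involved - a minimal sketch only, with a made-up MAC address and interface name rather than the actual hardware's - the rule pins the familiar interface name to the donor body's NIC:

    # /etc/udev/rules.d/70-persistent-net.rules (excerpt)
    # old (Zareason) NIC line removed/commented out; donor (Dell) NIC pinned to eth0
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:ff", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"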
From: "Michael Paoli" Michael.Paoli@cal.berkeley.edu To: "Rick Moen" rick@linuxmafia.com Cc: BALUG-Admin balug-admin@lists.balug.org Subject: Re: [BALUG-Admin] (forw) [Bug 1707447] Re: Roster should not lowercase addresses Date: Thu, 26 Oct 2017 22:34:36 -0700
In the meantime ... I've also got some hardware issues pestering me :-/ ... my primary personal laptop ... alas, the last response I got from Zareason was essentially that it's "too old" for them to get parts (and it's not *all* that old) ... and also, one significant part of the (potential) repair ... well, if the part can no longer be had (GPU/video integrated onto the mainboard) - that's effectively irreparable - and even if the part were available, it wouldn't be economically feasible. The *first* mainboard failed with GPU failure not long after the 1 year warranty (at least I think it was before the 2 year mark). I'm not sinkin' that kind'a money into it yet again if that manufacturer's hardware (it's an Intel OEM laptop) - or at least that particular mainboard and the components thereupon - is that unreliable. At the *moment* the video is working, ... but often when the laptop is rebooted or powered up ... the video doesn't work (can only barely/marginally make out / guess what's on the screen when it's flaking out ... and it's not the OS or the like - the problem even applies to BIOS screens). So ... that issue bit me again this morning, and it took many reboots this evening before the video was displaying properly again ... for now. Not to mention ... keyboard ... both "N" and "O" keys failed ... and that's the 2nd keyboard on the laptop ... using an external USB keyboard on it now. But I did pick up for an excellent price (free, "miracle of the free curbside giveaway") a Dell P10E ... which, as far as I've seen thus far (still need to check it out further), appears to be fine, and is a good approximate replacement (even better in several ways - includes eSATA, 4 USB ports (1 more than the existing laptop), supports two internal regular laptop SATA drives (spinning rust and/or SSD)) ... anyway, I'll probably cover more of that else-thread (and/or else-list) in the fairly near future ... but if that hardware checks out good, I'm exceedingly inclined to move over to that laptop. But I digress (ah, topic drift) ...
Quoting Michael Paoli (Michael.Paoli@cal.berkeley.edu):
Ah, and it appears the operation has been quite successful ...
Good job!
Also completed successful live migration of the balug VM back from physical host "vicki" ... which nominally resides under the brain of the patient, but was live-migrated away to "vicki" while the patient's brain was in suspended animation for a while during the patient's operation.
See, this is one of the reasons I am trying to move production systems into VMs: It means you can snapshot systems and migrate them between hardware-disparate physical hosts as necessary.
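(A rough sketch of what such a live migration can look like - this assumes a libvirt/KVM setup, which the thread doesn't actually specify, and the domain name and destination URI below are approximations; shared or pre-mirrored guest storage between the two physical hosts is the usual prerequisite:)

    # on the source physical host: push the running guest to "vicki" without downtime
    virsh migrate --live --persistent balug qemu+ssh://vicki/system
    # then confirm where the guest ended up
    virsh --connect qemu+ssh://vicki/system list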
Starting years ago, I repeatedly tried to have a discussion on the SVLUG mailing list about hardware selection and system design for _home servers_ that took advantage of recent advances, and it was a bust, because all participants did was advocate buying whatever they'd been buying, ignoring both the home-server role's requirements and the recent advances.
I suggested a server inside my home should ideally have no moving parts (no fans, SSDs instead of hard drives) so as to not be noisy, and be low-power and low-heat: They suggested units with lots of fans and hard drives. I suggested it needed to do RAID1 on real, i.e., better than USB, mass-storage interfaces: They suggested Raspberry Pis. I suggested it needed to be x86_64 to avoid requiring out-of-tree kernels (security risk if there's a kernel exploit requiring an updated kernel _now_) and special-snowflake bootloaders: They suggested ARM.
And, the one that really disappointed me: I suggested it'd be cool to run at least a couple of VMs, production host alongside beta, and that for a production full-service Internet host I really cannot do it in less than 512 MB RAM because of SMTP antispam: My friend Joey Hess, one of the key people in the Debian Project at the time, responded saying he uses a BeagleBone Black, a nice small ARM-based server -- that maxes out at 512 MB RAM total.
I pointed out that this just didn't support the VM-based architecture I thought was the future way forward (unless there were some way in the 2010s to do Web serving, SMTP, mailing lists, DNS nameservice, ssh, and the usual server tasks in radically less RAM), and all people could say was 'Shift tactics and divide processes up among a farm of tiny computers.' Eh, no, I don't think a data centre of Raspberry Pis is a reasonable alternative. So, I've had to mostly drive iterations of that discussion myself, first on the SVLUG list and more recently on CABAL's.
Anyway, this is why I think small, silent or quiet, x86_64-based machines able to run at least 8 GB RAM are cool and the proper way forward: primarily because hypervisors solve a bunch of problems including live-migration, and decoupling what you run from the specific hardware chipsets you run it on.
But this weekend's motherboard purchase is actually for a family car, FWIW, because the current one went Pfft! https://store.allcomputerresources.com/20chptcrpcme10.html
Ca-ching!
Man, my dad, who before he became a Pan American World Airways captain sold and worked on 'computers' back when they were made by Marchant (https://en.wikipedia.org/wiki/Marchant_calculator) in Oakland, would hardly recognise the current world, where you end up having to reboot your oven, and where your automatic transmission misbehaves because the car's motherboard has burned itself out.
From: "Rick Moen" rick@linuxmafia.com To: balug-talk@lists.balug.org Subject: Re: [BALUG-Talk] operation appears to have been a success: hardware issues Date: Sun, 29 Oct 2017 10:16:21 -0700
Quoting Michael Paoli (Michael.Paoli@cal.berkeley.edu):
Ah, and it appears the operation has been quite successful ...
Good job!
Thanks. :-)
Also completed successful live migration of the balug VM back from physical host "vicki" ... which nominally resides under the brain of the patient, but was live-migrated away to "vicki" while the patient's brain was in suspended animation for a while during the patient's operation.
See, this is one of the reasons I am trying to move production systems into VMs: It means you can snapshot systems and migrate them between hardware-disparate physical hosts as necessary.
Yup, helluva lot of advantages to VMs. Not *the* answer to *everything*, but in many cases, the best answer/solution, or among the best.
Starting years ago, I repeatedly tried to have a discussion on the SVLUG mailing list about hardware selection and system design for _home servers_ that took advantage of recent advances, and it was a bust, because all participants did was advocate buying whatever they'd been buying, ignoring both the home-server role's requirements and the recent advances.
Well, ... folks will tend to have their biases/preferences, and human nature 'n all that, they'll want to feel good about what they bought, especially if they put much resource into it ($$ and/or otherwise), so ... confirmation bias, etc. ... they'll often tend to advocate for what they chose/bought/did ... even when quite sub-optimal. :-/ Though too, some will want to go into the "horror stories" of what they did/tried/got.
I suggested a server inside my home should ideally have no moving parts (no fans, SSDs instead of hard drives) so as to not be noisy, and be low-power and low-heat: They suggested units with lots of fans and hard drives. I suggested it needed to do RAID1 on real, i.e., better than USB, mass-storage interfaces: They suggested Raspberry Pis. I suggested it needed to be x86_64 to avoid requiring out-of-tree kernels (security risk if there's a kernel exploit requiring an updated kernel _now_) and special-snowflake bootloaders: They suggested ARM.
Well, ... "it depends" ... a bit, anyway. But yes, what you suggest/advocate is a darn good solution - among the best, if not *the* best, for most typical home installations. Moving parts are noise - bad for home - especially "living quarters" thereof, and also most common failure points. I recall some years back, dealing with supporting a fairly large number of installed PCs in retail environments - about 50% of the failures were power supply - typically because the fan failed or slowed too much, then the heat killed the power supply. Next in line, probably about 30% or somewhat more, was failed hard drives. The other hardware failures were a bit miscellaneous, but moving-parts failures were, or caused, about 80% of the hardware failures. That's still relatively true today. E.g. still deal with lots of computers in data centers, and, what most commonly fails? The moving parts - hard drives and fans ... though as hard drives are increasingly being replaced by SSD, hard drive failures are becoming less common as their count goes down.

The other annoying failures I see too commonly on at least some enterprise equipment - batteries. E.g. I've seen many series of systems where hard drives are hot swappable, there's a RAID controller with battery-backed cache, and, over time, the batteries fail at higher rates than the hard drives(!), and the batteries are *not* hot swappable ... ugh. But ... at least the battery failures "only" cause a performance hit 'till they're replaced ... but for some systems/workloads, that's a big deal - even critical.

And yes, good to keep the power requirements reasonably down. Most home systems don't need to be sucking that much power most of the time. Most home users ought not be seriously mining Bitcoin at home. 8-O If they really want to do that, it's gonna burn a lot of power, and make a lot of noise - and also have significantly different hardware requirements and optimizations. And, for home, if it really is a server, mostly just used via network and such, and if there's a suitable place to tuck it away where noise isn't really an issue, then sometimes a system with fan(s), hard drives, etc. may be fine - for that particular installation. But even then, it still has the moving-parts failure downsides, etc.
ARM? Well, "it depends". ;-) But yeah, for the most part, the further one gets from x86 (and these days x86_64/amd64) on Linux kernels, the more one moves towards the fringes on support ... which is not the best place to be. So for now, and probably for a long time to come, the best support (and quickest fixes for issues) with Linux will almost always be x86_64/amd64, with i486 (very) close behind, and everything else trailing along sometime after ... and at varying speeds/levels - depending how far out on the fringe.
Hard drive vs. SSD? I don't think SSD has fully taken over for hard drives ... quite yet. But the days(/years?) remaining for hard drives are numbered. In terms of cost per unit storage, wasn't all that many years ago, SSD was about 10x as expensive as hard drive. Now it's closer to 5x ... expect that trend to continue. Once SSD gets to right around as inexpensive as spinning rust, the hard drive will go the way of the floppy and other obsoleted technologies.
And, the one that really disappointed me: I suggested it'd be cool to run at least a couple of VMs, production host alongside beta, and that for a production full-service Internet host I really cannot do it in less than 512 MB RAM because of SMTP antispam: My friend Joey Hess, one of the key people in the Debian Project at the time, responded saying he uses a BeagleBone Black, a nice small ARM-based server -- that maxes out at 512 MB RAM total.
Yeah, the "small" systems, such as Pi - good for *some* things, but quite limited. A lot of things just don't work well - or at all - with such constricted RAM. Newer Pis do have more RAM ... but it's still not all that much.
I pointed out that this just didn't support the VM-based architecture I thought was the future way forward (unless there were some way in the 2010s to do Web serving, SMTP, mailing lists, DNS nameservice, ssh, and the usual server tasks in radically less RAM), and all people could say was 'Shift tactics and divide processes up among a farm of tiny computers.' Eh, no, I don't think a data centre of Raspberry Pis is a reasonable alternative. So, I've had to mostly drive iterations of that discussion myself, first on the SVLUG list and more recently on CABAL's.
Yes, VMs are lovely. :-) Not a panacea, but the "right" answer/solution to a whole lot 'o problems/challenges.
Anyway, this is why I think small, silent or quiet, x86_64-based machines able to run at least 8 GB RAM are cool and the proper way forward: primarily because hypervisors solve a bunch of problems including live-migration, and decoupling what you run from the specific hardware chipsets you run it on.
Yup, ... *the* best answer a whole lot 'o the time.
But this weekend's motherboard purchase is actually for a family car, FWIW, because the current one went Pfft! https://store.allcomputerresources.com/20chptcrpcme10.html
Ca-ching!
Egad. Not all that long ago, I sold a "perfectly good" running car (no major mechanical issues, basically just some minor issues, ran fine, good shape, etc.) ... for ... yeah, sold it for less than twice the cost of that car computer. I know some folks who quite advocate, for modern cars, buying a spare computer/controller for the car - at least for some/many vehicles. Notably, for many, when that thing dies, the car is basically dead in the water, and one is quite stuck if one doesn't have a spare.
Man, my dad, who before he became a Pan American World Airways captain sold and worked on 'computers' back when they were made by Marchant (https://en.wikipedia.org/wiki/Marchant_calculator) in Oakland, would hardly recognise the current world, where you end up having to reboot your oven, and where your automatic transmission misbehaves because the car's motherboard has burned itself out.
Oye, ... technology sure does march along! First computer my dad ever worked on ... magnetic drum storage. First computer I ever programmed, mass storage for backing up and loading programs - cassette tape. 8-O My first x86 computer ... floppies (360/1200 & 1440 KiB) and a hard drive (~150M). MicroSD is now up to ~300G ... so that's about a million times the storage of a 5.25" DD floppy, shrunk down to the size of about my pinky fingernail. Not to mention the increase in speed, reliability, and no moving parts. And drives? Yesterday I bought a 2T 2.5" SSD. :-) More than 10,000 times the capacity of my first hard drive, in a much smaller size, at much lower power, and with much greater speed.
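(Rough check on that "about a million times":)

    ~300 GB / ~360 KB  =  (300 x 10^9 bytes) / (360 x 10^3 bytes)  ~=  830,000, i.e. roughly a million.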
So ... some other miscellaneous bits ... quite pleased with my Dell find from earlier this year. Just took a peek on eBay - looks like the fair market value of that used hardware is probably roughly around $300, +/- some fair bit depending on configuration, particular model, condition, etc. If a laptop still holds that much value at >~=3 years old, that's sayin' somethin' right there. Anyway, so far, quite favorably impressed. And darn nice to have integrated 3 buttons for the pointer device ... no more of that middle-button emulation with the two-button press ... ugh. And the stick-point (seems every manufacturer calls it something different ... trademarks?) pointer thingy - good to also have that - always rather missed it when without, as I'd used laptops with that type of pointer device before trackpads started showing up on laptops. Oh, ... and quiet? Pretty darn quiet, ... sometimes - actually much of the time - dead silent quiet. Yes, it has fans. But ... with SSD (thus far just the ~150G one from the older laptop), and pretty good thermal design (lots of heat piping and such - it's designed to be able to dissipate quite a bit of power ... the power supply is a bit over 200W! ... brick of a thing) ... anyway, with my more typical loads *much* lighter than that - much of the time the thing isn't even running either of the fans. Contrast that with the laptop I had been using, where the fan was always going, 100% of the time, anytime the laptop was powered up. I think also on the Dell, the larger size and metal casing all also help with heat distribution and dissipation.
And ... to play devil's advocate ... well, just a wee bit ;-) Raspberry Pi? Well, ... "it depends". *Sometimes* the Pi is the best solution/fit. I'll give a couple scenarios. I've a project in mind; I want it to be small, isolated, independent, and as secure as feasible, at least for a basic home-based project. The Pi is a good fit for it, as it's economical for that, low power, and highly independent. What I have in mind for that is actually a pair of Pis. One "front end", connected to The Internet, reasonably hardened as feasible, but that would mostly be a fairly simple web/API that would talk to a secured back-end. And ... the secured back-end? Another Pi. And the communication between them? Serial only - simple interface and protocol - no PPP or anything like that, just a simple login/interact, quite restricted and hardened API - so that'd be the secured back-end. Now, ... could that be done with virtual machines? Yes, ... but ... as secure and isolated as that? No. Virtual machine technology on Linux is mostly all fine and wonderful, but it's also kind'a the relatively shiny new kid on the block. There have been security issues, ... and there may likely still be some - at least on occasion - for some while to come. It's just not as secure and hardened, and without the long track record of, e.g., the much older virtualization technology on mainframes. So ... with more maturity and vigilance, VMs on Linux do stand an excellent shot at establishing a very good, solid, long, secure track record ... but ... not quite there ... yet.
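To sketch the general shape of that serial-only link (purely illustrative - device names, baud rate, and the "api-shell" restricted command interpreter are hypothetical, and real hardening would of course need much more care than this): on the back-end Pi, a getty on the hardware serial port, answered by a dedicated, heavily restricted account; the front-end just talks down the wire - no network stack between the two at all.

    # back-end Pi: login prompt on the hardware UART
    # (on a Pi 3 the GPIO UART may also need enable_uart=1 in /boot/config.txt)
    systemctl enable --now serial-getty@ttyAMA0.service
    # dedicated account whose "shell" is the limited API script (hypothetical path)
    useradd -m -s /usr/local/sbin/api-shell apiuser
    # front-end Pi: simple interactive session over the serial line
    picocom -b 115200 /dev/serial0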
Another Pi "solution" scenario. Colo and "bring your own box". With customer-provided equipment in colo, costs are in large part about space and power. A Pi can be a way to very substantially cut down on that space and power, and some colos support such, at significantly reduced pricing. And a Pi is "more than enough" for *some* workloads. It certainly lacks some features/capabilities, but sometimes a Pi is the answer for certain cost/benefit tradeoff analysis results.
However, in many many cases, Pi (or similar) is *not* the answer. E.g. with VMs, the virtual sizing can be pretty arbitrary - from very small, to quite beefy. Can't do that with Pi. Some VM technologies also do RAM deduplication. Got 200 VMs running on a large physical host? 83 of 'em running the exact same kernel? Don't really need 83 copies of that same code in RAM for each VM, do we? So, some VMs can do that RAM deduplication, thus further increasing capacity, efficiency, etc. Can't do that with Pi. Don't think it's 100% there yet, with Linux, but some other operating systems well support dynamically adding RAM and CPU to a host, and likewise removing them - all while the host is running. I think as such technology comes to and matures in Linux, that will also help, and well play into VM technologies. At present with Linux VMs, most of the dynamic scaling typically involves adding and dropping VMs as loads and such change ... and sometimes, though not so dynamic, spinning up larger VMs, and later dropping back to smaller VMs. That works fairly well for some types of workloads ... but there are others that just want a bigger host, or a small number of large hosts - even if they're VMs. But ... off-peak, they may not need nearly so much resource. So, ... OS images that can dynamically grow and shrink ... I think in future that will significantly also benefit Linux ... but not fully there yet.
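(For reference, the RAM-deduplication piece on Linux/KVM is Kernel Samepage Merging - a minimal peek at it, assuming a kernel built with KSM; QEMU normally registers guest memory as mergeable, so identical pages across guests get coalesced:)

    # enable KSM (as root)
    echo 1 > /sys/kernel/mm/ksm/run
    # how many pages are currently shared, and how many map onto them
    cat /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing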
And ... ye olde laptop with the video/GPU problems? I was thinkin' I might mostly turn it into a sort'a "headless server". As I'd left the BIOS settings, it does have wake-on-LAN enabled ... but that doesn't cause it to *boot* off the LAN - just wakes it up. Network 'n all that still fine. And ... it has more RAM (8 GiB) compared to "vicki" (only 2 GiB), and is also much quieter than "vicki" - the 1U machine with the small high-RPM fans. Anyway, *may* be feasible ... but the video is highly challenging ... so ... sometimes I can boot something and see fine what's going on 'n all ... other times I can't even see the hardware's boot selection menu. :-/ So ... shall see where that goes. But, if I can coerce it to reasonably do so, the general idea is to have it be an "on demand" headless server. It's not really good for all that much beyond that - at least in terms of reliability, ... though I suppose too, it still has a perfectly good optical burner in it - so could also use it for that.
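(The wake-on-LAN part is easy enough to drive from another box on the LAN - a sketch, with a placeholder MAC address and interface name; the NIC needs magic-packet wake armed before the box is powered down:)

    # on the old laptop, before shutdown: arm magic-packet wake
    ethtool -s eth0 wol g
    # later, from any other machine on the same LAN, to wake it
    wakeonlan aa:bb:cc:dd:ee:ff      # or: etherwake -i eth0 aa:bb:cc:dd:ee:ff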
Anyway, soon on to installing my nice new 2TB SSD in the (relatively) "new" (to me) laptop. I've been thinking on what to do with the various drives, and have generally decided I want the two SSDs in the "new"(er) laptop, the larger one as primary/"H"DD1, the smaller (~150G) as secondary/HDD2. I'll do mdraid (RAID-1) of at least the most/more critical stuff between the two - with software RAID, I can conveniently pick and choose what is mirrored and/or not - most hardware RAID is usually not nearly as flexible (though hardware RAID does also offer *some* advantages - notably if it's rock-solid hardware RAID (flakey hardware RAIDs don't count!), and one does matched drives in RAID-1 (or sometimes also RAID-5, but generally RAID-1 for at least core OS stuff), it can be basically dead simple and highly reliable - particularly also with proper monitoring to detect any disk issues. And replacing failed drives in such can also be dead simple - confirm the failed drive, pop it out, put in the replacement, confirm that went in fine, confirm it remirrors all fine again - that's basically it).

Anyway, software RAID is much more flexible - a key advantage in many scenarios. But it's a bit more to configure, and a wee bit more work to do recoveries on - certainly not complex, but a bit more than as dead simple as hardware RAID, where no actual software commands need be run at all (but alas, one should run some to at least make suitable confirmations).

Anyway, mirror the most/more important stuff, and ... "all that extra space" - particularly on the larger SSD - that's "extra space" for ... "whatever". Thus far I've got in mind, e.g., about 100G of ISO images (whole collection), a convenient place to stage backups before copying to other media ... some things like that. Probably some less important VM "throw-away" (or nearly so) images. Anyway, haven't fully planned it out, but the idea is to get the new SSD into the "new" laptop as soon as feasible in the least disruptive way (have a general game plan - still refining that) - then I can mostly take my time to work out the details of placement, adding mirroring, doing at least some reasonable testing of storage before adding chunks of it to active use, etc.

And the old ~700G spinning rust that came in the "new" laptop? Moved that drive to the "old" laptop - gives it much more storage than it used to have ... but at a performance hit ... but performance is definitely not at all critical for anything I might manage to still get that old laptop to reasonably do. More later, but those are the next steps on my hardware (mini-)adventure. Oh, also not a top priority, but one thing the "new" laptop may also well be able to do - potentially serve as a "portable install server". I did also, a hair over a week ago, pick up an 8 port Gig switch.
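(A sketch of that selective-mirroring idea with md - device/partition names are entirely hypothetical, and the real layout of course still needs working out first:)

    # mirror just the "most/more critical" partition pair across the two SSDs
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    # keep an eye on it
    cat /proc/mdstat
    mdadm --detail /dev/md0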
Quoting Michael Paoli (Michael.Paoli@cal.berkeley.edu):
Well, ... folks will tend to have their biases/preferences, and human nature 'n all that, they'll want to feel good about what they bought, especially if they put much resource into it ($$ and/or otherwise), so ... confirmation bias, etc. ... they'll often tend to advocate for what they chose/bought/did ... even when quite sub-optimal. :-/ Though too, some will want to go into the "horror stories" of what they did/tried/got.
Sure, though it was just a bit frustrating that people weren't focussing, and instead just promoted irrelevant things they were familiar with. It ended up being not just 'if you want the job done right, you need to do it yourself', but actually 'if you want the job even _attempted_, you'll need to do it yourself.'
Moving parts are noise - bad for home - especially "living quarters" thereof, and also most common failure points.
Back in the 1990s, linuxmafia.com ran on a spare workstation-class AMD K6 box (IIRC) in my apartment in SOMA, San Francisco -- and that was OK. You had the background whir of a large case fan, a smaller one in the PSU, and chattering of a pair of SCSI drives. Around 2000 when I moved to Menlo Park, I migrated it to an old VA Research model 500 (Pentium II) 2U pizza-box server that I kept in a niche in my living room -- and I gradually became aware that I was staying away from that part of my living room, as the noise was just that bothersome. Not to mention it making heat significantly worse on warm days: You suddenly remember there's a damned good reason for all of that HVAC in server rooms.
So, I learned my lesson about that, and in 2006 when I moved to my longtime family home a few blocks away in Menlo Park, all of that clatter and heat got moved entirely out of living space. But I also became fascinated by the potential for silent, low-power home servers inside the house, and not just little toy embedded devices. There's a whole niche of these called the HTPC market -- home theater PCs.
(Promo: If you want to hear about Kodi and MythTV on such Linux machines, come hear Rob Walker's talk this Wednesday evening at SVLUG in San Jose. www.svlug.org. It's going to be memorably interesting.)
E.g. still deal with lots of computers in data centers, and, what most commonly fails? The moving parts - hard drives and fans [...]
What's particularly pernicious is the cascade failure that often ensues starting with the fans. Fans wear out over time, but _also_ more often than not they are terrible fans to begin with, ones made using sleeve bearings rather than ball bearings. Sleeve bearings that fail, as they all do sooner rather than later, seize up the whole fan. You now have a motor that's struggling to move the fan, but unable to. Instead of convecting heat away, it's neither doing that nor moving away the additional heat it's generating. Instead of a fan, you now have a heater in your machine. Heat starts to rise, pretty soon it starts to stress and kill electronics.
The aforementioned PII pizza box, when I checked on it in the first Menlo Park location, had started to get its fans clogged with dust, which I then made a point of more-frequently cleaning (being mindful of static discharge). Dust can trigger the previously-described cascade failure early.
I got many years of additional life out of my main VA Linux Systems 2230 2U pizza-box PIII server, which replaced the VA Research model 500, through the 2000s by ripping out its small case fans and replacing them with aftermarket Antec fans: To my disappointment, it emerged that even my employer VA Linux Systems relied on terrible sleeve-bearing case fans -- doubtless because they were cheap, and in a competitive market where you are competing against companies like Dell, IBM, and HP that have deeper quantity parts discounts, you economise where you can.
But all those experiences reminded me that a typical server, workstation, or in many cases even laptop is only one seized fan away from cascade failure -- _and_ that all those fans and hard drives themselves add to the heat problem (not to mention power consumption) even while working perfectly. So, here's an idea: How about a design that doesn't need fans to begin with, and has no hard drives or other moving parts?
Hard drive vs. SSD? I don't think SSD has fully taken over for hard drives ... quite yet.
But for a home server, you can economically rely on just SSD for everything except colossal A/V libraries and such. And that's when you might add an external RAID drive array.
Doing that keeps the heat from the hard drives, and from any fans required for the hard drives, away from the main system -- among other advantages.
Newer Pis do have more RAM ... but it's still not all that much.
Raspberry Pi 3 model B at 1GB RAM is as good as you can get, for now. I can run an Internet server on that -- or half of that -- but I just can't do that plus hypervisor mode.
And then there are the other drawbacks.
I've a project in mind; I want it to be small, isolated, independent, and as secure as feasible, at least for a basic home-based project. The Pi is a good fit for it, as it's economical for that, low power, and highly independent. What I have in mind for that is actually a pair of Pis. One "front end", connected to The Internet, reasonably hardened as feasible, but that would mostly be a fairly simple web/API that would talk to a secured back-end.
IMO, the moment you say 'secure as feasible' and 'connected to the Internet', you have ruled out ARM (at least, as things stand). Reason: kernels.
Ever considered how kernels are produced for the Raspberry Pi platform? You should, because the devil's in the details, there.
There is generic support in the mainline kernel.org Linux kernel for ARM, _but_ to my knowledge it still doesn't run on real-world ARM devices without porting in the form of out-of-tree patchsets. As a person running the RPi in production, you are dependent on the state of that out-of-tree patchset. You cannot run straight kernel.org code on your RPi, let alone straight kernel.org code with, say, a grsecurity snapshot or similar or patches for your favourite hypervisor.
Don't get me wrong: Such a machine's admin has substantial protection from safety in numbers, the RPi being as popular as it is. But the fact is that, if there's an urgent kernel security issue, you may be screwed in the short term (and there are times when the short term matters). Eventually, the RPi's complicated kernel patchset will be updated and new binary kernels emerge, but until then you'll be wondering 'What's taking so long?' What's taking so long is the consequence of reliance on out-of-tree kernels.
Some day, all of this may get fixed and support for the most important and most common ARM devices mainlined. Also, some day 64-bit ARM (and thus larger memory address spaces) will be common rather than exotic. Today, though, if I want mainline kernels, modern RAM address spaces, avoidance of weird one-off bootloaders, and mainline hypervisor support, that needs x86_64.
Oh, and there's the mass storage thing. RPi defaults to microSD for main storage. I'll assume that's adequate for speed and reliability (without having tested), but what do you do for redundancy? You have a pair of USB2 ports. People differ in their assessment of USB's adequacy for main (as opposed to casual) mass storage. I consider it too flakey. Other people disagree. (I wish them luck.)
IMO, one of the distinguishing traits of servers is that we have fallback plans to minimise chance of failure and keep downtime really low. Maybe your fallback plan for a failed or glitchy microSD card is a second one sitting on a shelf and an automatic rsync-over-ssh script to pull down the latest backup. Maybe it's an entire spare RPi sitting on the shelf with automatic rsync ditto. But lack of meaningful ability to do RAID1 is at minimum a significant weakness in a server.
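(E.g., the shelf-spare-plus-rsync fallback might amount to little more than a cron entry along these lines - host name and paths hypothetical:)

    # nightly pull of the latest backup onto the standby card/box
    17 3 * * *  rsync -a --delete -e ssh backup@rpi.example.net:/srv/backup/ /srv/standby/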
Another Pi "solution" scenario. Colo and "bring your own box".
See above.
And ... ye olde laptop with the video/GPU problems? I was thinkin' I might mostly turn it into a sort'a "headless server".
Concur. As I like to say, a laptop so much wants to be a server that it even brings its own UPS along for the ride.
Typical laptops aren't really made to dissipate heat 24x7, though; they're designed for light and intermittent duty. If you're serious about server deployment, first check to see whether it has only one SATA header, or two. Either way, one option is to relocate storage to outside the laptop. Mount the hard drive or SSD in an external case, and run an eSATA cable to it from the laptop. The external case will need separate power from an AC wall wart that provides USB power. If there are two SATA headers on the laptop motherboard, you can do this with two drives, and run them RAID1 w/Linux md. (The IMO dumb way to accomplish the same thing is to use USB for the data path as well. I wouldn't, but I'll mention this for completeness's sake.)
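(As for checking how many SATA links the machine exposes, a quick, non-invasive hint from the running system - the count can include ports the chipset provides but the vendor never wired up, so treat it as a hint only:)

    # ATA ports the kernel registered
    ls /sys/class/ata_port/
    # which block devices are attached, and over what transport
    lsblk -o NAME,TRAN,SIZE,MODEL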
Yes, this requires putting money into it, but not a lot, and the equation is: Spend a modest amount of money and effort, gain reliability. This is a good deal if your time and effort dealing with future problems has value. You have alleviated some SPoF concerns by giving each storage device its own heat-dissipating enclosure and PSU, and rendered the body of the laptop more reliable by removing a source of heat, improving airflow, and reducing stress on its PSU.
Of course, arguably if you are using one or more _SSD_ in the laptop instead of HDs, the concern about heat dissipation and power draw is quite small, which is true, and going solely internal can make a lot of sense.
Forgot to mention:
The aforementioned PII pizza box, when I checked on it in the first Menlo Park location, had started to get its fans clogged with dust, which I then made a point of more-frequently cleaning (being mindful of static discharge). Dust can trigger the previously-described cascade failure early.
Some dust components are also to a greater or lesser degree electrically conductive, which contributes to electronics becoming unreliable before (also, then) bringing about early failure. Your typical house, if it's like mine, is a pretty miserable failure at dust elimination compared to a server room, so introducing to it a delicate electronic device with a high-speed fan that blows air _and dust_ into and through the electronics is an accident waiting to happen. (Servers are more at risk than are workstations and laptops because of airflow and air speed through them.) But notice how this entire risk factor pretty nearly vanishes if there are no fans.
Sure, though it was just a bit frustrating that people weren't focussing, and instead just promoted irrelevant things they were familiar with.
Also, it never fails: Whenever people just don't understand a market for goods or services, by default all they _do_ understand is price, so what they say and do reflects that mindset. In particular, they always gravitate to whatever's _cheapest_ (for myopic and self-defeating values of 'cheapest'; see also inkjet printers, QIC tapes).