Quoting Michael Paoli (Michael.Paoli@cal.berkeley.edu):
Well, ... folks will tend to have their biases/preferences, and human nature 'n all that, they'll want to feel good about what they bought, especially if they put much resource into it ($$ and/or otherwise), so ... confirmation bias, etc. ... they'll often tend to advocate for what they chose/bought/did ... even when quite sub-optimal. :-/ Though too, some will want to go into the "horror stories" of what they did/tried/got.
Sure, though it was just a bit frustrating that people weren't focussing, and instead just promoted irrelevant things they were familiar with. It ended up being not just 'if you want the job done right, you need to do it yourself', but actually 'if you want the job even _attempted_, you'll need to do it yourself.'
Moving parts are noise - bad for home - especially "living quarters" thereof, and also most common failure points.
Back in the 1990s, linuxmafia.com ran on a spare workstation-class AMD K6 box (IIRC) in my apartment in SOMA, San Francisco -- and that was OK. You had the background whir of a large case fan, a smaller one in the PSU, and the chattering of a pair of SCSI drives. Around 2000, when I moved to Menlo Park, I migrated it to an old VA Research model 500 (Pentium II) 2U pizza-box server that I kept in a niche in my living room -- and I gradually became aware that I was staying away from that part of my living room, because the noise was just that bothersome. Not to mention that it made the heat significantly worse on warm days: You suddenly remember there's a damned good reason for all of that HVAC in server rooms.
So, I learned my lesson about that, and in 2006, when I moved to my longtime family home a few blocks away in Menlo Park, all of that clatter and heat got moved entirely out of living space. But I also became fascinated by the potential for silent, low-power home servers inside the house -- and not just little toy embedded devices. There's a whole niche of such machines, the HTPC (home theater PC) market.
(Promo: If you want to hear about Kodi and MythTV on such Linux machines, come hear Rob Walker's talk this Wednesday evening at SVLUG in San Jose. www.svlug.org. It's going to be memorably interesting.)
E.g. still deal with lots of computers in data centers, and, what most commonly fails? The moving parts - hard drives and fans [...]
What's particularly pernicious is the cascade failure that often ensues, starting with the fans. Fans wear out over time, but _also_, more often than not, they are terrible fans to begin with, ones made using sleeve bearings rather than ball bearings. Sleeve bearings that fail, as they all do sooner rather than later, seize up the whole fan. You now have a motor that's struggling to move the fan, and unable to. Instead of convecting heat away, the fan is neither doing that nor moving away the additional heat its stalled motor is generating. Instead of a fan, you now have a heater in your machine. Heat starts to rise, and pretty soon it starts to stress and kill electronics.
The aforementioned PII pizza box, when I checked on it in the first Menlo Park location, had started to get its fans clogged with dust, so I made a point of cleaning them more frequently (being mindful of static discharge). Dust can trigger the previously described cascade failure early.
I got many years of additional life out of my main VA Linux Systems 2230 2U pizza-box PIII server, which replaced the VA Research model 500, through the 2000s by ripping out its small case fans and replacing them with aftermarket Antec fans: To my disappointment, it emerged that even my employer VA Linux Systems relied on terrible sleeve-bearing case fans -- doubtless because they were cheap, and in a competitive market where you are up against companies like Dell, IBM, and HP, which have deeper quantity parts discounts, you economise where you can.
But all those experiences reminded me that a typical server, workstation, or in many cases even laptop is only one seized fan away from cascade failure -- _and_ that all those fans and hard drives themselves add to the heat problem (not to mention power consumption) even while working perfectly. So, here's an idea: How about a design that doesn't need fans to begin with, and has no hard drives or other moving parts?
Hard drive vs. SSD? I don't think SSD has fully taken over for hard drives ... quite yet.
But for a home server, you can economically rely on just SSD for everything except colossal A/V libraries and such. And that's when you might add an external RAID drive array.
Doing that keeps the heat from the hard drives, and from any fans required for the hard drives, away from the main system -- among other advantages.
Newer Pis do have more RAM ... but it's still not all that much.
A Raspberry Pi 3 Model B with 1GB RAM is as good as you can get, for now. I can run an Internet server on that -- or on half of that -- but I just can't do that plus hypervisor mode.
And then there are the other drawbacks.
I've a project in mind, want it to be a small, isolated, independent, and secure as feasible, at least for a basic home-based project. Pi is a good fit for it, as it's economical for that, low power, and highly independent. What I have in mind for that is actually a pair of Pis. One "front end", connected to The Internet, reasonably hardened as feasible, but that would mostly be a fairly simple web/API that would talk to a secured back-end.
IMO, the moment you say 'secure as feasible' and 'connected to the Internet', you have ruled out ARM (at least, as things stand). Reason: kernels.
Ever considered how kernels are produced for the Raspberry Pi platform? You should, because the devil's in the details, there.
There is generic support in the mainline kernel.org Linux kernel for ARM, _but_ to my knowledge it still doesn't run on real-world ARM devices without porting in the form of out-of-tree patchsets. As a person running the RPi in production, you are dependent on the state of that out-of-tree patchset. You cannot run straight kernel.org code on your RPi, let alone straight kernel.org code with, say, a grsecurity snapshot or similar, or patches for your favourite hypervisor.
Don't get me wrong: Such a machine's admin has substantial protection from safety in numbers, the RPi being as popular as it is. But the fact is that, if there's an urgent kernel security issue, you may be screwed in the short term (and there are times when the short term matters). Eventually, the RPi's complicated kernel patchset will be updated and new binary kernels will emerge, but until then you'll be wondering 'What's taking so long?' What's taking so long is the consequence of reliance on out-of-tree kernels.
Some day, all of this may get fixed and support for the most important and most common ARM devices mainlined. Also, some day 64-bit ARM (and thus larger memory address spaces) will be common rather than exotic. Today, though, if I want mainline kernels, modern RAM address spaces, avoidance of weird one-off bootloaders, and mainline hypervisor support, that means x86_64.
Oh, and there's the mass storage thing. RPi defaults to microSD for main storage. I'll assume that's adequate for speed and reliability (without having tested), but what do you do for redundancy? You have a pair of USB2 ports. People differ in their assessment of USB's adequacy for main (as opposed to casual) mass storage. I consider it too flakey. Other people disagree. (I wish them luck.)
IMO, one of the distinguishing traits of servers is that we have fallback plans to minimise chance of failure and keep downtime really low. Maybe your fallback plan for a failed or glitchy microSD card is a second one sitting on a shelf and an automatic rsync-over-ssh script to pull down the latest backup. Maybe it's an entire spare RPi sitting on the shelf with automatic rsync ditto. But lack of meaningful ability to do RAID1 is at minimum a significant weakness in a server.
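For what it's worth, the sort of thing I mean by 'automatic rsync-over-ssh script' is about this simple. A sketch only, in Python, with made-up hostnames and paths, meant to be run from cron on whatever box holds the cold spare:

  #!/usr/bin/env python3
  # Sketch only: pull the live box's latest backup onto the cold spare.
  # Hostname and paths below are invented for illustration.
  import subprocess
  import sys

  SOURCE = "backup@rpi-live:/srv/backup/latest/"   # assumed remote backup dir
  DEST = "/srv/spare-image/"                       # local staging area

  def pull_backup() -> int:
      """One rsync pass over ssh; returns rsync's exit status."""
      cmd = ["rsync", "-a", "--delete", "-e", "ssh", SOURCE, DEST]
      return subprocess.run(cmd).returncode

  if __name__ == "__main__":
      sys.exit(pull_backup())

Run something like that hourly or nightly from cron, and your recovery plan amounts to 'swap in the spare and restore from the staging area'. It's a workaround, not RAID1, but it's better than nothing.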
Another Pi "solution" scenario. Colo and "bring your own box".
See above.
And ... ye olde laptop with the video/GPU problems? I was thinkin' I might mostly turn it into a sort'a "headless server".
Concur. As I like to say, a laptop so much wants to be a server that it even brings its own UPS along for the ride.
Typical laptops aren't really made to dissipate heat 24x7, though; they're designed for light and intermittent duty. If you're serious about server deployment, first check to see whether it has only one SATA header, or two. Either way, one option is to relocate storage to outside the laptop: Mount the hard drive or SSD in an external case, and run an eSATA cable to it from the laptop. The external case will need separate power from an AC wall wart that provides USB power. If there are two SATA headers on the laptop motherboard, you can do this with two drives and run them RAID1 w/Linux md (a rough sketch follows below). (The IMO dumb way to accomplish the same thing is to use USB for the data path as well. I wouldn't, but I'll mention it for completeness's sake.)
Yes, this requires putting money into it, but not a lot, and the equation is: Spend a modest amount of money and effort, gain reliability. This is a good deal if your time and effort dealing with future problems has value. You have alleviated some SPoF concerns by giving each storage device its own heat-dissipating enclosure and PSU, and rendered the body of the laptop more reliable by removing a source of heat, improving airflow, and reducing stress on its PSU.
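For the record, the md RAID1 setup mentioned above is only a few commands. Here's a rough sketch, wrapped in Python just to keep it readable; /dev/sdb, /dev/sdc, and the mountpoint are assumptions -- check lsblk for your real device names before doing anything like this:

  #!/usr/bin/env python3
  # Sketch: mirror the two external eSATA drives with Linux md.
  # Device names and mountpoint are assumptions, not gospel.
  import subprocess

  DISKS = ["/dev/sdb", "/dev/sdc"]   # the two external drives (assumed)
  ARRAY = "/dev/md0"

  def run(cmd):
      print("+", " ".join(cmd))
      subprocess.run(cmd, check=True)

  # Build the RAID1 array across the two whole disks.
  run(["mdadm", "--create", ARRAY, "--level=1", "--raid-devices=2"] + DISKS)
  # Filesystem and mount; pick whatever mountpoint suits the laptop-server.
  run(["mkfs.ext4", ARRAY])
  run(["mount", ARRAY, "/srv"])

After that, md handles the mirroring: a dead drive degrades the array instead of taking the server down, and 'mdadm --detail /dev/md0' tells you which half to replace.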
Of course, one can argue that if you are using one or more _SSDs_ in the laptop instead of HDs, the concern about heat dissipation and power draw is quite small -- which is true, and in that case going solely internal can make a lot of sense.