Before the thread becomes cluttered with people suggesting alternatives or questioning why you wouldn't just run <insert manufacturer, OS, etc.>, here is how the person who built this replied:
simbimbo says:
December 9, 2012 at 11:03 am
Thanks for the great write up Hack A Day. I would like to answer some of the questions posted. @Geebles these machines all run SSD’s and I ordered them with AppleCare, so I hope to never have to change a drive ;-)
As for the reason I built this… Well, I guess I just like a challenge ;-), but seriously, the company I work for has a need for large numbers of machines to build and test the software we make.
There were plenty of discussions of virtual environments and other “bare motherboard”/Google-datacenter-type solutions, but the fact is, the Apple EULA requires that Mac OS X run on Apple hardware, and since we are a software company we adhere to these rules without exception. These Macs all run OS X in a NetBooted environment. We require Mac OS X because the products we make support Windows, Linux and Mac, so we have data centers with thousands of machines configured with all 3 OSes running constant build and test operations 24 hours a day, 365 days a year.
As for device failure, we treat these machines like pixels in a very large display, if a few fail, it’s ok, the management software disables them until we can switch them out. This approach allows us to continue our operations regardless of machine failures.
@bitbass I tried the vertical approach, but manufacturing the required plenum to keep the air clean to the rear machines cost too much for this project, but it’s not off the table for the next rack
@Kris Lee When I open the door I can literally watch the machine temps go up, but I can keep it open for 15-20 minutes before the core temps reach 180F
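The "pixels in a very large display" idea from the quote above can be sketched in a few lines; every name here (`MachinePool`, `mark_failed`) is hypothetical and illustrative, not taken from the actual management software:

```python
# Sketch of "treat machines like pixels": a failed machine is simply
# disabled in software and the pool keeps running without it.
class MachinePool:
    def __init__(self, hostnames):
        self.active = set(hostnames)
        self.disabled = set()

    def mark_failed(self, host):
        # Disable the machine until it can be physically swapped out.
        self.active.discard(host)
        self.disabled.add(host)

    def available(self):
        return sorted(self.active)

pool = MachinePool(["mini-%03d" % i for i in range(160)])
pool.mark_failed("mini-042")
print(len(pool.available()))  # 159 machines still in service
```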
I believe Mozilla has large racks of mac Minis for testing Firefox on. (Same issue, I believe -- to support multiple platforms on the same hardware, you need something that's legal to run OSX on.)
Someone didn't finish his homework. The legal, supported method of virtualizing OSX is:
Guest: Leopard Server
Host: ESX 5.1
Hardware: Mac Pro
Mac Pro is supported on the VMware HCL.
Leopard Server is legally virtualizable on Mac hardware.
VMware supports OSX Leopard as a guest on ESX 5.1
Mac Pro towers are going to be less dense, but given his cooling situation, lower density is probably a win. What datacenter wants 8 kW of laptop CPUs stuffed into a rack? Virtualization would also overcome the lack of redundant PSUs.
You misinterpreted the sentence. They were saying that since the drives were bought with AppleCare, if one of them dies, he doesn't need to change the drive himself. AppleCare to the rescue.
Quoting the statement and captioning it with "lol" is not contributing, at least not on HN.
It might have been contributing if you included the codinghorror link up front.
In response to said link, I will observe that it was written 1.5 years ago, and SSDs have been progressing very rapidly in both performance and reliability.
You ignored the last clause of the sentence. The author doesn't plan to not replace drives because they are SSDs, rather because they are covered by AppleCare.
You're right that that does make the assertion somewhat less ridiculous. But it still sounds like he's saying that he expects the SSDs to be more reliable than regular drives.
What he sounds like he's saying and what he is saying are very different things, and his reply sounds pretty different to me than to you, I think. The comment he's replying to is:
Looks amazing! Some real professionalism there! However replacing a hard drive is going to be a pain! Wonder if there is a way to modify the mac mini to allow a more accessible hard drive access?
So I read his reply as two separate statements: a) they're all AppleCare-covered and non-vital, so he won't need to muck around with maintaining the individual Minis, and b) they're running SSDs, not HDDs.
You could have avoided the hail of downvotes if you had included your conclusion: that you read the AppleCare part but still understood him as saying the SSDs were more reliable. I don't think I've ever seen a comment of just 'lol' escape downvotes on HN; it's pretty clearly not good HN etiquette.
Me too! Before we used fruitstrap[0] we used DeviceAnywhere. This was a 2U rack unit that contained a franken-iPhone. The hardware buttons were soldered up to GPIO on the DeviceAnywhere.
For those who showed interest in learning more about our infrastructure/automated testing process: drop me an email at gp at our corp domain, fb.com.
I don't work on the team anymore, but I can probably start off a thread with the right people involved from Facebook's side.
It's very cool, but is anyone actually tied to OS X as a server platform? Couldn't they move to FreeBSD and save a ton of money in an application like this? I'm wondering if there's a real business case for this, or it's just a fun hack.
edit: I guess lumped into this is the small market that seems to exist for colocated Mac Minis. Is there something about them that is better than renting commodity x64 hardware?
> anyone actually tied to OS X as a server platform?
Yes, if you make software that runs on OS X (or iOS, and you want to test on the iOS Simulator), you need OS X machines for your build and test process. You need lots of machines so you and your fellow developers can speed things up and run tests in parallel.
It's a niche use case, but Apple's Qmaster software facilitates distributed video rendering and exporting for Final Cut Pro and some of their other pro applications. A server cluster like this would probably be overkill, but when you're working with massive video files, extra (OS X) machines to do the heavy lifting make a huge difference.
One of my old employers has legacy systems built with WebObjects. In theory, WebObjects is Java and should run on anything, but typically it is on OS X Server. I left just as one of my colleagues was trying to figure out what to do about the death of the XServe.
I am currently using a cluster of Mac Minis as a server platform. Some benefits:
* Launchd is a massive improvement over the equivalent mess on Linux. This can't be overstated if you are managing your own hardware.
* You can develop on the same machine you are deploying to.
* You have exactly the same toolchain as on Linux.
* Lots of remote monitoring options that are unique to OS X, e.g. OS X Server
* The OS is stable and upgrades are safe enough to enable auto update. I could never do that on CentOS.
But really it comes down to hardware and resale value for me. 2 Mac Minis in 1RU is great value.
Most modern Linux distros use systemd now, which is a pretty good unified platform for starting services.
I don't quite understand the other points; you can certainly develop on Linux and there are lots of mature monitoring options.
The trick with auto-updates on Linux is that you're getting everything from the package manager. This forces you to keep everything up to date, not just some notion of the 'core system', which is what you get from Mac software update. Of course, you have to have some discipline and stage major updates in a testing area before you actually update production servers, but I think you'll find key packages like OpenSSL should probably be kept up to date anyway.
Personally, I maintain a system which consists of about 30 CentOS servers; not large by any means. But things like upgrades and monitoring are a non-issue; we test updates before we apply them, and we use Nagios for monitoring.
As someone downthread pointed out, you can get a SuperMicro 1U server with a hell of a lot more horsepower than a Mac Mini for an equivalent price; the motivation for Mac servers seems to be almost exclusively to test and build Mac-specific software. I'd be curious to know what your specific application is that motivates this?
I don't have extensive experience with Launchd, but what makes it better?
OS X doesn't have the same toolchain as Linux - notably, Apple doesn't ship anything licensed under GPLv3. This leads to OS X versions of common GNU tools being extremely outdated.
I do agree with the point about developing and deploying in the same environment. I do both in Linux for that exact reason; far fewer surprises when your deploy environment matches what you've already debugged.
Everything. It is all in one place, very simple to use, has process monitoring like Monit, and is well supported.
Nobody uses what Apple ships by default. With Homebrew or MacPorts I have exactly the same setup as I have on my Linux server. Plus lots of amazing GUI wrappers which can simplify setup and ongoing maintenance.
Homebrew has some pre-built binaries too. From the FAQ: "Homebrew does provide pre-compiled versions for some formula that take a long time to compile (such as Qt which can take many hours to build). These pre-compiled versions are referred to as bottles and are available at: http://sf.net/projects/machomebrew/files".
1. Name your package manager, and I can list the missing packages, or missing flags which have made my life a pain at some point in my career.
2. Yup, and the only way, should you need to set any flags.
3. True. Which is why it's nice that you can mount dmg's and install pkg's via the command line.
How is remote management for these systems? I'm a big Mac fan, but for server infrastructure I've often wondered if Software Update and related things could be more trouble than it's worth.
Mac Minis are horrible server hardware. We've had a couple running as servers. They fail randomly. Their hard drives fail. They don't rack mount easily. The only reason to have them is if you inherit some old ones, don't want to throw them away, and then don't mind replacing and throwing failed units away pretty often.
Yep. It really depends on the type of load. In other words, I didn't mean "servers" as in necessarily web servers, just machines that handle server loads. In this case it just means constant high CPU and disk churn. So if this is a high-contention test rack running tests non-stop, its hardware will experience a server-type level of stress.
They do, but we found that Mac Minis do so more often. I have a sample of about 15, compared with 100+ other regular 1U rack-mountable machines, and the Mac Minis failed at a much higher rate.
The worst part is that some failures were not a 'stop dead' failure that was easy to detect so the unit could be replaced. Some would just freeze intermittently; some just got really slow.
You could tell that there is a practical difference between server-grade and consumer-grade components. In retrospect, the time spent debugging and messing with this was probably not worth the gain in density and cost.
These were 2009-2011 mac minis (the previous generation). Hard drive failures and complete freezes were the major problems we saw. Freezes were random but clustered on the same set of machines. Over time they got worse. We never found what it was exactly (except the hard drives), and of course, replacing components is not a quick easy job when it comes to Apple.
I'm actually surprised any DC would take that equipment. They, in my experience at least, are very fussy about what you put in the racks, power draw, etc.
Oh, and we get 640 cores in 20U (8× 4-core Xeons per 1U machine), and that leaves enough room for a 32TB SAN, FC switches and a pair of redundant LAN switches.
Regarding splitting the power using the hack described: 160 melted Minis and a halon cloud coming up.
You should have a talk with your power cord provider. You should be using cables that can handle at least 4 amps in anything with a 110v plug on it. I don't think you can buy one smaller than 18ga and those are good for 10 amps. Remember, you have to handle enough current to blow the breaker if something goes wrong (unless you are British and have your own fuse in the plug).
We went through several designs with the vendor. This cable is designed to operate at the circuit rating. The wire from the PDU plug to the split is 12GA 600V and the cable from the split to the mini is 14GA 300V.
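As a rough sanity check on that design, common chassis-wiring ampacity rules of thumb (approximate figures, not the vendor's actual ratings or an NEC table lookup) put both segments at or above a typical 15 A circuit rating:

```python
# Rough AWG -> ampacity figures for short power-cord runs.
# These are rule-of-thumb values only.
AMPACITY = {18: 10, 14: 15, 12: 20}  # AWG -> approximate amps

circuit_amps = 15  # assumed circuit rating
segments = {"PDU plug to split": 12, "split to Mini": 14}
for name, gauge in segments.items():
    status = "OK" if AMPACITY[gauge] >= circuit_amps else "undersized"
    print(f"{name}: {gauge} AWG, ~{AMPACITY[gauge]} A rating -> {status}")
```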
Most rented rackspace (in UK DCs at least) tends to be a maximum of 16A (at 240V) per 42U cabinet, so just under 4kW. By my estimation those Mac Minis will be drawing ~13kW at peak.
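The arithmetic behind that estimate, assuming roughly 85 W peak per Mini (a guess based on that generation's published maximum draw, not a figure from the post):

```python
# Peak draw for a rack of 160 Minis vs a typical UK cabinet feed.
minis = 160
watts_per_mini = 85           # assumed peak per machine
peak_kw = minis * watts_per_mini / 1000
cabinet_kw = 16 * 240 / 1000  # 16 A @ 240 V feed
print(peak_kw, cabinet_kw)    # 13.6 vs 3.84: several times a standard feed
```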
You can get higher power density in a rack in pretty much any UK DC, you just need to pay for it.
The problem is you often end up paying for the rack space that would be allocated for the total power you're using, without actually getting the rack space!
In exchange, the DC's own electrician turns up during your install and makes sure things are working correctly and safely, you might end up with different sockets if required, 3-phase power, etc.
Sorry about the wording of my sentence. It was supposed to imply that I built a prototype cable at home to show to the cable vendors who would have to build them. Also, there was plenty of discussion about the potential for these cables to be misused, so the vendor manufactured the cables to handle 15A. I have corrected my WordPress page to reflect this discussion.
Out of curiosity, what's the problem with that? I assume he'd have the copper twisted together to make a good connection, with the solder just holding it in place instead of acting as the conductor.
Household wiring must always use screw fittings, wire nuts, etc. Solder is not "generally approved" on DC either, unless inside an approved housing, and then only when done in certain ways. Household wiring is all the construction codes usually bother to cover, and they apply to 120/240VAC up to the switch, socket or outlet. Lamps, stereos, etc. are covered by code in some areas, like LA County, but by Underwriters Laboratories approval in most others in the US. A "UL" label (or equivalent) on the power cord is required for sale in most larger US communities.
120VAC can be attached by solder to an approved type of glass or phenolic circuit board inside the correct kind of enclosure. UL generally will not approve its use just about anywhere else for power-line connections.
This is something to keep in mind when doing any DIY project. Get a copy of the UL construction-code book. You'll find that all connections should be inside a non-meltable (fire-retardant) enclosure, via screw terminals, wire nuts, or compression fittings of some kind. Heat and humidity can quickly cause electrolytic corrosion on any soldered joint, eventually causing enough resistance for it to get hot and melt down.
Solder is great for some low-voltage dc purposes. When a connection might get subjected to heat (as in a house fire, resistive connection, etc.) the mischief that running solder can cause is just too great for it to be a good idea, so codes do not approve it in most areas. [Sorta like teflon tape in high-pressure gas lines. :-)] Over time, electrolysis slowly destroys soldered copper connections that aren't hermetically sealed, too. Some fungi accelerate that.
> Household wiring must always use screw fittings, wire nuts, etc.
This is for reasons of mechanical strength. Naive soldered connections can be pulled apart with your bare hands, and will not stand up to being snagged, tripped over, used as an acrobatic toy by a toddler, etc. This is especially true for the single-strand cable used in building wiring, which can apply a lot of leverage to a soldered joint.
Electrolysis and fungi? No, solder is chemically very stable. It's used to great effect to join copper water pipes, where the large surface area and built-in strain reliefs allow solder to be strong enough.
Depending on the resistivity of the copper wire versus the lead-tin mix in the solder, there may end up being more current going through the solder than the actual wire.
Even if the resistivity values of the copper and solder are similar (I'm pretty sure they're at least in the same order of magnitude), you'll still end up with the same amount of current running through the copper wire and the solder.
You don't know what you're talking about. Copper is very conductive compared to solder.
Pure copper has a resistivity of 0.0172µΩ⋅m while 63% tin/37% lead solder has a resistivity of 0.145µΩ⋅m. (http://alasir.com/reference/solder_alloys/) That's almost an order of magnitude difference.
Let's say you had the absurd case of 1/2 copper and 1/2 solder. I = I_cu + I_solder, and V = I_cu * R_cu = I_solder * R_solder => 8x more current going through the copper than the solder.
You would need 8x more solder than copper in order to get "more current going through the solder than the actual wire."
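The split above follows from treating the two materials as parallel resistors of equal geometry, so current divides in inverse proportion to resistivity:

```python
# Current division between equal cross-sections of copper and solder
# in parallel: I_cu / I_solder = rho_solder / rho_cu.
rho_cu = 0.0172      # resistivity of pure copper, in µΩ·m
rho_solder = 0.145   # resistivity of Sn63/Pb37 solder, in µΩ·m

ratio = rho_solder / rho_cu
print(round(ratio, 1))  # ~8.4x more current flows in the copper
```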
Well hey, I guess I hit it on the head with them being at least within an order of magnitude, eh?
You forgot to take into consideration the thermal conductivity of solder. Although there's only 1/8th the current going through the solder than that of the wire, solder has a much much lower thermal conductivity than copper. After a bit of googling, copper's thermal conductivity is 401W/(mK) and bismuth solder is 19W/(mK). Although there's less current going through that solder, it's so much more thermally conductive it'll probably see its temperature increasing faster than that of the copper.
That certainly doesn't help when solder's melting point tends to be around 140C and copper's is over 1000C. Also, while you may have said a 50/50 mix of solder and copper is extreme, if the OP was awful at soldering and was using something like 16-gauge wire, it's not so ridiculous that there may be a 50/50 mix of solder to wire; hell, that might even be a bit low.
The numbers you yourself reported show that copper is much more thermally conductive than solder, and not the other way around.
Thermal conductivity describes how quickly the heat conducts through the wire. Since the wire is uniformly heated, this is a minor detail (assuming that the wire's thermal conductivity is considerably higher than the electrical insulation around the wire). Instead, you need to look for the heat transfer coefficient of insulated wire.
The melting point of Sn63Pb37 is 183C, not 140C. 183 is not "around 140."
If the wire were hot enough to melt the solder in a copper+solder combination then it would be well more than hot enough to melt the plastic insulation around just the copper wire itself, which is typically rated for only 90C.
Um, what? You'll dissipate more heat at 110V than at 220V (high school physics: cable loss is proportional to resistance and the square of the current, so for a given wattage it is inversely proportional to the square of the voltage; hence high-voltage power transmission).
In any event, 220V is no big deal. It's household voltage almost everywhere except the US. (A few countries use 110V; Australia is supposedly 240V.)
The heat dissipated by the two metals would be identical if their resistances were the same, of course. I'm not even considering the difference in amperage between a 110V and a 220V connection, as I'm pretty sure that's not what the parent parent post was in reference to.
I was under the impression that the issue at hand was that using solder for high-load wires was bad because of solder's extremely low melting point. I meant to point out that even though the solder is just holding the wires together, there is still a current running through it, and if the resistivity of the solder is lower than that of the copper, the chances of the solder melting would be quite a bit higher.
I honestly don't know if that's why the parent parent post mentioned that soldering high-load wires is bad. I just extrapolated issues that someone might have when using solder on high-load wires, and relayed them in my reply.
Yeah, I was puzzled by the post too, but solder is in no way going to be lower resistance than copper, and if your wiring is heating up to the melting point of solder you have other issues.
> Even if the resistivity values of the copper and solder are similar (I'm pretty sure they're at least in the same order of magnitude), you'll still end up with the same amount of current running through the copper wire and the solder.
And what's the problem with that? The resistance of the solder will be minimal (as you say yourself, pretty much equivalent to copper), so the voltage drop and heat production should both be low? Am I missing something?
I mean, I'm pretty sure there's solder connecting the copper wire to the PCB of any PSU in my house, all running at 230V.
Considering you only really have to pay around the 60-dollar mark for the OS now, I don't think it's much of a big deal. I use one of these at home as a mini fileserver/wiki; it draws sweet FA, makes little to no noise, and has an HDMI connector direct into my TV. I would happily deploy one for our company marketing team or small-scale offices.
I understand the idea of treating them like pixels, so if a fan dies or a NIC card dies, no problem, just stop using that Mini. But what about memory corruption or other issues that are more difficult to detect? Normally server hardware has things like ECC memory to prevent these issues, but in this case a Mini with bad RAM could intermittently corrupt data for some time before it's noticed (if ever).
The machines are for testing. They'll detect those through secondary means. If a machine's faulty, it'll cause two cases: (1) faulty software will register as faulty; (2) good software will register as faulty. The third case (faulty software marked as good), is really unlikely, and any time it does happen, a later bug report will give a hint.
A test failure will probably bring up an engineer that will track down the issue, and a re-test will inevitably occur. The faulty machine will eventually (hopefully) get labeled flaky and will get repaired.
Of course, nobody may care and just use a double-test to verify that an executable is good.
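That re-test idea might look something like this sketch; the harness hook `run` and all other names are made up for illustration, not from any real test infrastructure:

```python
# Re-run a failing test on a different machine, counting per-machine
# failures so chronically flaky hosts get flagged for repair.
from collections import Counter

failures = Counter()

def run_with_retry(test, machines, run):
    # `run(test, machine)` returns True on pass (hypothetical hook).
    for machine in machines:
        if run(test, machine):
            return True
        failures[machine] += 1
    return False  # failed everywhere: likely a real software bug

def flaky_hosts(threshold=3):
    return [m for m, n in failures.items() if n >= threshold]

# Simulate a host that always fails while its neighbour passes.
ok = run_with_retry("smoke-test", ["mini-7", "mini-8"],
                    lambda test, machine: machine != "mini-7")
print(ok, flaky_hosts(threshold=1))  # True ['mini-7']
```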
> A test failure will probably bring up an engineer that will track down the issue, and a re-test will inevitably occur. The faulty machine will eventually (hopefully) get labeled flaky and will get repaired.
Depending on how valuable the engineers' time is. I have seen this play out: hardware gets blamed last, after hours and days of testing have been wasted. So tests are run and re-run, blame goes all around, until finally, after hours and hours of testing, it is determined that maybe it is hardware after all.
In the end an engineer's time is worth a lot more than the savings obtained by running flaky but cheaper hardware.
Interestingly, it looks like the front fans blow _into_ the rack. This means that if the door isn't securely closed it'll blow open - being on hinges and having massive fans attached.
It would be better to have the fans on the back and suck air through the rack instead.
That said, DC floor space is cheap compared to power and cooling. I'm surprised they didn't lower the density so as not to have a massive fire risk.
I looked into some rack cooling options, but was unable to find a solution that would provide the amount and wide coverage of airflow I needed to move air slowly and uniformly through the rack to provide effective cooling. ( I was designing this to be used with the active cooling rear doors, so I couldn't overwhelm the door with too much air or it wouldn't cool the air effectively, and would raise the ambient temp of the room). So the fans move a high volume of air through the entire cabinet (including the corners) at low velocity resulting in very effective cooling of each of the 40 shelves.
The fans are large so they move a high volume of air at a low speed; the door doesn't move even when it is left unlatched.
Also, can you please explain your "Massive Fire risk" comment? All of the hardware installed in this rack is UL certified and all of the machines will simply shut down if they get too hot.
Rack "foorprints" at the datacenter are expensive, and you pay for 15kw per footprint whether you use it or not. it just made sense to fit it all into one. Having had this rack running at full power for a couple of weeks now, I can say the temps stay lower than our SuperMicro racks in the same row!! and the Supermicro racks can only hold 20 machines before they run out of power in the footprint.
Curious - what do we call a computer like this? It's obviously not going to make the TOP500, but is it a "supercomputer"? I thought perhaps "minisupercomputer" might be fitting, but according to Wikipedia that is a term for a class of computers that became obsolete in the early 90s.
I've been a proud owner of a Mini Server (slightly customised: replaced the memory and swapped the primary disk for an SSD) for over a year. I use it as my main workstation and I love it; so small (and relatively cheap, including the upgrade), yet so powerful.
Definitely a fun challenge. If you're going to invest in the hardware and a custom build, forget the Y cable and figure out a better solution, e.g. rent the half-rack next to it to hold the PDUs. +1 on the massive door fans.
Actually it's far cheaper than even an equivalent SuperMicro solution, let alone HP/Dell etc. You are getting at minimum 2 Mac Minis in 1RU, which as of today could be an 8-core Core i7 / dual SSDs / 16GB RAM.
Plus if you want to upgrade them then you can put them on eBay and get 75% of the original cost back. Try doing that with a server.
The Mac Mini is hyper-threaded, so it appears to the OS as 8 cores. The OS X Server edition starts at $999. Replace the two hard drives with two SSDs ($200) and add 16GB RAM ($100). Now show me a SuperMicro that will have two nodes in 1RU for an equivalent price.
Then remind me how much it is going to sell in a year when I decide to upgrade. I guarantee that the Mac Minis would sell in a day on eBay.
Hyper-threading doesn't mean you magically get an extra 4 cores. It's a technology that works well for _some_ workloads, and for those it achieves up to a 30% performance boost, per Intel's claim (http://en.wikipedia.org/wiki/Hyper-threading). So at best you could treat it as approximately 5 cores. This may not change the original argument, but the Mac Mini is not an 8-core computer. A true dual 4-core CPU server will have significantly more CPU capacity.
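That "approximately 5 cores" figure is just the optimistic arithmetic:

```python
# 4 physical cores with an optimistic 30% Hyper-Threading gain.
physical_cores = 4
ht_boost = 0.30
effective = physical_cores * (1 + ht_boost)
print(effective)  # 5.2 "effective cores" at best
```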
Hyper-threading works by duplicating just enough circuitry to allow a thread to use CPU resources while another is stalled on a load, nothing more. No "5 cores" about it; it's simply a hardware-assisted scheduling trick.
This is exactly what we did. I bought a Mini, replaced the HD with a 128GB SSD, maxed the RAM out at 16GB and installed CentOS on it. The machine works great, and if we start getting a lot of traffic, I'll just get another one and move Postgres off onto that.
At some point, if the need arises, I'd love to look into getting a Thunderbolt NAS for the DB storage. That would be fun.
Unless I misread something that is a single node. The key point about the Mac Minis is you can have TWO nodes in 1RU. And given that the data centre costs will be more than the hardware it makes the overall proposition very compelling.
If you look at the picture carefully, it's clearly 4 Minis in 1RU. But that's not the problem; the power draw and heat generated will be ridiculous, and you'll pay big $$$ to keep them running 24/7.
Sorry, that makes no sense. For around $1200 you can buy a 1RU SuperMicro with a 6-core i7, 32GB RAM, etc. that will smoke your Mini ($1500, 4 cores) on every metric. If your rackspace is precious you could fit two of them into one RU with a half-sized case, or by leaving out the case altogether (not much more esoteric than stacking Mac Minis, imho).
I love the Mini (I have one under my desk!) but suggesting Apple consumer gear (including the Apple tax) as a cheaper alternative to generic x86 consumer gear is a little nuts.
Well, "good" is a relative term when you use a non-ECC board for a server. ;)
I just summed up the component prices for similar servers that we have built; that's the ballpark if you are willing to assemble it yourself (takes around 45 minutes when you're not doing it for the first time).
I can get a MicroCloud chassis for $500 per server (x8), with the rest of the hardware working out to around another $700-800 per node (x8 again). Slightly lower density, but not too bad.
@Adam Ahhh.. Nope, you can’t have my job ;-)