“Physicalization” is an awkward name for an approach to server consolidation that seeks to offer a hardware-based alternative to virtualization by cramming multiple low-power processors into a small amount of rack space. These processors are invariably mobile parts, designed for power-sensitive portable devices, and server vendors are building very small, modular server nodes around them and packing them as densely as possible into rack units.
It’s too early to tell if this trend has legs, but ever since we started covering it, something about it has bothered us: in short, it seems to go against Moore’s Law. But more on that in a moment.
We asked the IT professionals in The Server Room what they thought of physicalization vs. regular multicore + virtualization, and the results were pretty informative. In short, the answer from most seems to be either “use the right tool for the job,” or “what’s the point of physicalization, again?” In terms of wattage and cost, virtualization comes out ahead for most workloads.
Nobody was really willing to take up the cause of physicalization and make a case for it, and we won’t do that here. But we do want to offer some thoughts on why more than one vendor has latched onto this idea, despite the fact that it’s at best an unorthodox way to make use of Moore’s Law.
Moore’s Law and integration
One way of looking at Moore’s Law is as a statement about the speed at which die-level integration becomes cheaper than board-level integration for ever larger quantities of transistors. As transistors get smaller, you can cram more of them into the same amount of die space at the same cost per square millimeter (and a dramatically lower cost per transistor). So one of the ineluctable results of Moore’s Law is that, at any given process node, there is a fairly sizable range of transistor counts over which it’s much cheaper to fabricate all of those transistors on a single die (as opposed to fabbing them on separate dies and using board-level integration).
Given the reality above, it should always be cheaper to integrate, say, four identical processor cores on a single die than it would be to divide them among multiple sockets and use board- and system-level integration to gang them together. In other words, a quad-core system should always theoretically be cheaper than a comparable pair of dual-core systems, at least in terms of price-per-core.
Of course, all of this theorizing discounts a number of realities that affect system cost, like the cost of hard disks and RAM, for instance. But in the final accounting, it still seems that if you’re going to buy, say, 32 cores worth of x86 processing power, then you’re better off in terms of both cost and wattage with eight quad-core sockets than you are with 32 single-core sockets, no matter how low-power the single-core parts are individually. But if this is true, then why would any vendor use board-level integration to produce a “physicalized” server solution? The likely answer has to do with how processor vendors like Intel price their products.
It’s all about the margins
For any given chipmaker, the higher the core count, the higher-end the CPU; and the higher-end the CPU, the fatter the margin. So a quad-core Intel Xeon commands a significant premium in terms of margins versus an Atom processor, because it sits atop the performance pyramid and is aimed at customers who are willing to pay more to be the fastest. This difference in margin width, though, provides an opening that physicalization vendors are seeking to exploit.
Because (low-margin) mobile parts are priced lower than (high-margin) processors with higher core counts, physicalization vendors can afford to use costly board- and system-level integration to gang them together and sell them at a lower cost-per-core. So for anyone who, for whatever reason, is looking to buy the largest number of x86 cores for the lowest price, Intel’s margin structure makes it cheaper for them to buy those cores individually (from either Intel or a rival low-power x86 vendor) and integrate them the old-fashioned way, rather than to buy them on as few dies as possible.
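The margin arithmetic above can be sketched with toy numbers. To be clear, every price below is invented purely for illustration (these are not actual Intel list prices, and the board-integration cost is a guess); the point is only the shape of the comparison:

```python
# Hypothetical prices, for illustration only -- not real list prices.
QUAD_CORE_CHIP_PRICE = 700.0   # one high-margin quad-core server part
MOBILE_CHIP_PRICE = 45.0       # one low-margin single-core mobile part
BOARD_COST_PER_NODE = 60.0     # assumed extra board/system-level
                               # integration cost per physicalized node

def cost_per_core_multicore(chip_price, cores_per_chip):
    """Cost per core when the cores share one die (die-level integration)."""
    return chip_price / cores_per_chip

def cost_per_core_physicalized(chip_price, board_cost_per_node):
    """Cost per core when each core ships as its own single-socket node."""
    return chip_price + board_cost_per_node

multicore = cost_per_core_multicore(QUAD_CORE_CHIP_PRICE, 4)
physical = cost_per_core_physicalized(MOBILE_CHIP_PRICE, BOARD_COST_PER_NODE)

print(f"multicore:     ${multicore:.2f} per core")   # $175.00 per core
print(f"physicalized:  ${physical:.2f} per core")    # $105.00 per core
```

With these made-up numbers, the physicalized node wins on price-per-core even after paying the board-level integration penalty, because the mobile part’s price sits so far below the high-end part’s. Shrink the margin gap and the advantage evaporates, which is exactly the lever the chipmaker controls.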
In conclusion, Moore’s Law overwhelmingly favors die-level integration, and in theory this should give the price advantage to multicore products. But the real-world pricing structure of x86 vendors in the multicore era makes it cheaper to buy cores individually. The rationale for physicalization, then, is that it exploits this margin difference in order to pit an x86 vendor’s low-end parts against its high-end parts, in spite of the fact that the vendor’s entire pricing structure depends on the idea that these parts don’t compete with one another.
So, hardware or software? The answer, as ever: it depends.