Teaching (and attending) classes virtually has many advantages, yet occasionally there are quirks. Some of my students recently asked for a longer break in our session than I had suggested because they had only two elevators in their building. Just as you could get into traffic jams in the elevator, on the freeway, or on the way past your friendly neighborhood farm, so too could your servers run into traffic jams – several kinds actually.
I’m not going to focus on networking, disk transfer rates, or other such peripheral sorts of traffic jams, but rather on the heart of computing – the CPUs, the processor cores. A question I am often asked is “Why do we need 64-bit servers?” Don’t worry, the answer here won’t be too terribly technical.
With Microsoft operating systems and services, a trend has been brewing. Exchange Server 2007 is available for x64 servers in production, and available on x86 (32-bit) architectures only for evaluation purposes. Exchange 2010 is x64 only, which should come as no surprise. Windows Server 2008 can be hosted on x86 or x64 hardware, yet Windows Server 2008 R2 runs only on x64 (and Itanium/EPIC (IA-64), another 64-bit architecture), not x86. Various SQL Server 2005 and 2008 editions are available for x86, x64, and IA-64.
On the client side, Windows 7 is still available for x86 as well as x64. But for computers in both the client and server roles, is 64-bit computing really necessary or desirable? To a degree, the question is nearly moot for mainstream computing: although lovely 8-bit and 16-bit systems still exist in embedded devices, mainstream computing is being swept along by a tidal wave. Just as desktops, portables, and servers mostly migrated from 16-bit to 32-bit processors years ago, the move from 32-bit to 64-bit systems is considered by many people to be inevitable.
But is it better? That’s debatable. It really depends on what you’re doing. To better answer the question, here are some fundamental questions you could ask about the software you’re running.
- Are the integers used in your software within the range of a 32-bit integer – roughly ±2.1 billion signed, or 0 to about 4.3 billion unsigned? Or do you need a greater range of values?
- Do you make extensive use of floating-point (real) numbers that could benefit from faster 64-bit and 128-bit processing?
- How are character strings processed? Are you using Unicode characters?
- Do you need to address more than 4 billion bytes (4GB) of memory per process? In practice, even within a 32-bit process’s 4GB virtual address space, only 2GB or 3GB may be available to the application itself.
- Do you need to address far more than 4GB, perhaps more than 16GB of physical memory (RAM)?
From an end-user perspective, or even from that of an administrator or systems engineer, the answers to these questions may not be entirely obvious. Nor should this list of questions be construed as anywhere near complete as criteria for weighing 32-bit versus 64-bit tradeoffs – certainly no rigorous algorithm or heuristic is specified here. The point is that the answers to these questions are contributing factors, yet the real assessment is fairly complex.
Much software runs perfectly fine on 32-bit systems. But if you’re doing high-end audio processing, certainly if you’re processing video, or if you’re hosting large databases, then 64-bit systems can have a marked, measurable capability and performance advantage over 32-bit systems. Before you start asking when 128-bit systems will become mainstream, you should be carefully planning your migration path to 64-bit workstations and servers if you haven’t already gone through that evolution.
With respect to Windows, Mac OS X, Linux, Solaris, and other operating systems, the choice is being made based on what the masses need. Even if your organization doesn’t have or need an elevator, that doesn’t mean you can, or ought to, stick with 32-bit computing forever.