Original Link: https://www.anandtech.com/show/1667
WinHEC 2005 - Keynote and Day 1
by Derek Wilson & Jarred Walton on April 26, 2005 4:00 AM EST- Posted in
- Trade Shows
Introduction
Landmark events don't occur very often in life, and as advancements in technology come and go, the landmark events come less frequently. Consider the area of transportation. For centuries, we relied on the brute force approach, while in the last 125 years we have gone from horse-drawn carriages to steam and gas powered engines, on to flight and eventually to space travel. The last major landmark is now more than 30 years in the past, and it's difficult to say what will happen next.

We can see similar progress in the realm of computing, and narrowing things down a bit, in the history of Microsoft. There have been a few major transitions over the past 25 or so years. 8-bit to 16-bit computing was a massive step forward, with address spaces growing from 16-bit to 20-bit and a nearly complete rewrite of the entire Operating System. The transition from 16-bit to 32-bit came quite a bit later, and it happened in stages. First we got the hardware with the 386, and over the next five years we began to see software that utilized the added instructions and power that 32-bit computing provided. The benefits were all there, but it required quite a bit of effort to realize them, and it wasn't until the introduction of Windows NT and Windows 95 that we really saw the shift away from the old 16-bit model and into the new 32-bit world.
You can see the historical perspective in the following slide. Keep in mind that while there are many more releases in the modern era (post 1993), many of the releases are incremental upgrades. 95 to 98 to Me were not major events, and many people skipped the last version. Similarly, NT 4.0 to 2000 to XP, while all useful upgrades, didn't exactly shake the foundations of computing.
The next transition is now upon us. Of course, we're talking about the official launch of Windows XP 64. It has taken a long time for us to really begin to reach the limits of 32-bit computing (at least on the desktop), and while the transition may not be as bumpy as the change from 16-bit to 32-bit, there will still be some differences and an accompanying transitional period. This year's WinHEC (Windows Hardware Engineering Conference) focused on the advancements that the new 64-bit OS will bring, as well as taking a look at other technologies that are due in the next couple of years.
Gates characterized the early 16-bit days of Windows as an exploration into what was useful. For many people, the initial question was often "why bother with a GUI?" After the first decade of Windows, the question was no longer being asked, as most people had come to accept the GUI as a common sense solution. The second decade of Windows was characterized by increases in productivity and the change to 32-bit platforms. Gates suggested that the third decade of Windows and the shift to 64-bits will bring the greatest increases in productivity as well as the most competition that we have yet seen.
What Took So Long?
One of the questions that many of you probably have is: what took so long? Opteron and later Athlon 64 have been available for quite some time - roughly two years now. AMD has talked of Windows 64 for that long and more, and only now are we finally seeing the fruits of MS' labor.

The conspiracy theorists are undoubtedly going to talk about an alliance between MS and Intel. It's difficult to say for certain whether that played a role, but remember that Xeon with 64-bit capabilities has been available for nearly a year now. Microsoft stands to benefit - in terms of increased sales of its OS and applications - from the release of XP-64, and we would like to think that they have simply been spending the extra time to make sure the release is as smooth as possible.
One of the other key factors in the delays is the drivers. While MS has control over the source code and APIs for Windows, those are not the only critical parts of the OS. Drivers are an integral part of any OS, and proper optimization as well as porting takes a lot of time and effort. While XP-64 is capable of running 32-bit applications through its WOW64 compatibility layer, drivers must be native 64-bit code.
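Incidentally, the WOW64 layer is visible to software: a 32-bit program can ask the OS whether it is running on 64-bit Windows. Here's a minimal sketch of our own (not something shown at the keynote) using the documented IsWow64Process API, resolved at run time since older kernel32.dll builds don't export it:

```c
#include <windows.h>
#include <stdio.h>

typedef BOOL (WINAPI *IsWow64Process_t)(HANDLE, PBOOL);

int main(void)
{
    BOOL wow64 = FALSE;

    /* Older versions of kernel32.dll lack IsWow64Process, so look
       it up dynamically instead of linking against it directly. */
    IsWow64Process_t fn = (IsWow64Process_t)GetProcAddress(
        GetModuleHandleA("kernel32"), "IsWow64Process");

    if (fn != NULL)
        fn(GetCurrentProcess(), &wow64);

    printf(wow64 ? "32-bit process on 64-bit Windows (WOW64)\n"
                 : "native process\n");
    return 0;
}
```

Drivers get no such compatibility layer; they run in the kernel's address space and must be recompiled as native 64-bit code.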
Whatever the cause of the delays, we feel relatively confident in stating that there wasn't any major conspiracy to hurt AMD or any other company. Microsoft has seen quite a lot of groups shift to Linux simply to gain earlier support for x86-64, and that can't be something they're happy about. In the end, getting a new OS release done well is more important than getting it done fast, and hopefully the release of XP-64 will be one of the less painful upgrades for early adopters.
One last item that we want to quickly point out: many people have also assumed that the launch of XP-64 and the embrace of the x86-64 architecture by Intel has somehow signified an end to Itanium and IA64. It was reiterated on several occasions that Itanium is not dead and it's not going anywhere. XP-64 will also have a version for the IA64 platform, and Itanium will continue to compete primarily with high-end servers like those from IBM with the POWER5 processors and Sun with their UltraSPARC processors. The chances of any home user running an Itanium system anytime soon are pretty remote, but the platform lives on.
How much memory is "enough"?
Bill Gates is often misquoted as having said something to the effect that "no one will ever need more than 640K of memory!" Happy to poke some fun at himself, Gates suggested that anyone who actually believed this legendary quote probably also thinks that Microsoft is working on an email tracking system.

While the actual specifics of what was said may be lost to time and fading memories, the basic idea is that at some point, even unimaginable amounts of memory are likely to be exhausted. With the availability of 64-bit computing - and obviously XP-64 is not first to the party, although we'll leave discussions of Linux and other 64-bit OS solutions out for now - we now have the potential to address up to 2^64 bytes of memory (or 16 EiB if you prefer). Gates quipped, "I'll be very careful not to say that 2 to the 64th will be enough memory for anyone. I will say that it might last us for a little while; it's quite a bit of memory, but some day somebody will write code that wants to go even beyond that."
In reality, our current x86-64 systems can't actually address that much memory - with the largest readily available DIMMs currently coming in at 2 GB in size, it would require over eight billion such DIMMs to provide 2^64 bytes of memory! For the present, implementations are limited to 40-bit physical and 48-bit virtual address spaces, which would still require anywhere from hundreds to over a hundred thousand of those 2 GB DIMMs to reach; the quick arithmetic is spelled out below. As the hardware limits are approached, things can be modified to stretch the address space until it eventually reaches 64 bits. When will this occur? Given that it took nearly two decades to exceed the constraints of the 32-bit address space, 64 bits could very well last for several decades (at least on the desktop). But that's just speculation for now.
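To put numbers on that, here's the back-of-the-envelope arithmetic as a small C program. A 2 GB DIMM holds 2^31 bytes, so each DIMM count is simply a power of two:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Each 2 GiB DIMM holds 2^31 bytes, so the number of DIMMs
       needed to fill an n-bit address space is 2^(n - 31). */
    uint64_t dimms_64bit = 1ULL << (64 - 31);   /* 8,589,934,592 */
    uint64_t dimms_48bit = 1ULL << (48 - 31);   /* 131,072 */
    uint64_t dimms_40bit = 1ULL << (40 - 31);   /* 512 */

    printf("DIMMs for a 64-bit space: %llu\n", (unsigned long long)dimms_64bit);
    printf("DIMMs for a 48-bit space: %llu\n", (unsigned long long)dimms_48bit);
    printf("DIMMs for a 40-bit space: %llu\n", (unsigned long long)dimms_40bit);
    return 0;
}
```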
One of the areas that we really need to talk about is who, specifically, needs more than a 32-bit memory space. While everyone stands to benefit in some ways from additional memory, the truth is that we will not all benefit equally. Servers and high-end workstations have already been available in 64-bit designs for a while, and they remain the primary examples of usage models that require more memory. You can see some examples of the server uses for 64-bit computing outlined above. A further example given was the MSN Instant Messenger servers: MS was able to reduce the number of servers - and thus the space required - while improving performance by shifting to a 64-bit environment.
On the desktop front, the vast majority of people aren't waiting with bated breath for Word 64 and Excel 64; instead, it's the content creation people that are working with large 3D models, movies and images that are beginning to run into the memory wall. 3D gaming may hit that wall next, although it may still be a couple more years. After conversations with several vendors following the keynote, we feel safe in stating that a major need for 64-bit Windows will only come if you're already running at least 2 GB of RAM. If you're not running that much memory, it doesn't necessarily mean you should avoid upgrading to XP-64, but you'll certainly get diminishing returns. On the other hand, if you're running 4 GB of RAM in your system and still running into memory limitations, 64-bit Windows has the potential to bring vast performance improvements.
The Benefits of XP-64
The ability to address more memory isn't the only change with XP-64. It may be the most immediately noticeable change, but there are other reasons for the switch. So what sorts of benefits will you see from the transition to XP-64? The following slide was provided as an outline of improvements.

We've already discussed the larger memory support, but other performance improvements are still present. X86-64 (which we'll use as the generic term to encompass both AMD64 and EM64T) doubled the number of general-purpose registers from eight in the 32-bit world to sixteen in the 64-bit world. Depending on the application, the additional registers could help a little or a lot. Anand is still working on benchmarks, and the change in OS has brought quite a few difficulties for our benchmarking setup, but chats with a few vendors suggest that the additional registers should bring on average somewhere around a 7% performance boost. Again, that's just a guess, and it's important to remember that hardware designers and compiler writers have been working around the idiosyncrasies of the x86 architecture for several decades now; with techniques such as register renaming, additional registers aren't going to bring massive performance boosts in most scenarios.
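As a rough illustration of where the extra registers matter - this is our own example, not one from the keynote - consider a loop that keeps a dozen values live at once. A 32-bit x86 compiler, with only eight general-purpose registers (several of them reserved for stack and frame bookkeeping), has to spill some of those values to the stack on every iteration; an x86-64 compiler can keep all of them in its sixteen registers:

```c
#include <stdio.h>

/* Twelve accumulators stay live across the loop. Compile for x86 and
   for x64 (e.g. 'cl /O2' with each target) and compare the assembly:
   the 32-bit build spills to the stack, while the 64-bit build can
   use rax-rdx, rsi, rdi and the new r8-r15 registers instead. */
static unsigned mix(const unsigned *data, int n)
{
    unsigned a = 1, b = 2, c = 3, d = 4, e = 5, f = 6,
             g = 7, h = 8, i = 9, j = 10, k = 11, m = 12;
    for (int idx = 0; idx < n; ++idx) {
        unsigned v = data[idx];
        a += v; b ^= a; c += b; d ^= c; e += d; f ^= e;
        g += f; h ^= g; i += h; j ^= i; k += j; m ^= k;
    }
    return a ^ b ^ c ^ d ^ e ^ f ^ g ^ h ^ i ^ j ^ k ^ m;
}

int main(void)
{
    unsigned data[64];
    for (int idx = 0; idx < 64; ++idx)
        data[idx] = idx * 2654435761u;   /* arbitrary test pattern */
    printf("%u\n", mix(data, 64));
    return 0;
}
```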
Several of the bullets in the slide are essentially just filler material; we've heard many times about how the latest OS provides better productivity, a great platform for new software, and improved reliability. Sometimes we see those features, sometimes we don't; only time will tell. The enhanced layer of hardware protection is actually present, via the XD/NX (execute disable/no execute) bit that Intel and AMD provide in their latest processors. How successful that feature becomes - it's already enabled in XP SP2 - is still up for debate, however.
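For those who want to see the NX bit in action, it's easy to observe on supporting hardware. The following is a small MSVC-specific sketch of our own devising: it copies a single 'ret' instruction into a plain writable page and into an explicitly executable page, then tries to call each. With hardware DEP enforced for the process, the first call raises an access violation:

```c
#include <windows.h>
#include <string.h>
#include <stdio.h>

/* A single 'ret' instruction: calling into this byte simply returns. */
static const unsigned char ret_stub[] = { 0xC3 };

static int try_exec(void *p)
{
    __try {
        ((void (*)(void))p)();   /* jump into the buffer */
        return 1;                /* executed successfully */
    } __except (EXCEPTION_EXECUTE_HANDLER) {
        return 0;                /* NX/DEP blocked the fetch */
    }
}

int main(void)
{
    /* Writable but not executable: blocked when DEP/NX is active. */
    void *rw = VirtualAlloc(NULL, 4096, MEM_COMMIT | MEM_RESERVE,
                            PAGE_READWRITE);
    /* Explicitly marked executable: always allowed to run. */
    void *rwx = VirtualAlloc(NULL, 4096, MEM_COMMIT | MEM_RESERVE,
                             PAGE_EXECUTE_READWRITE);
    memcpy(rw, ret_stub, sizeof(ret_stub));
    memcpy(rwx, ret_stub, sizeof(ret_stub));

    printf("data page ran: %s\n", try_exec(rw) ? "yes (DEP off)" : "no (DEP on)");
    printf("code page ran: %s\n", try_exec(rwx) ? "yes" : "no");
    return 0;
}
```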
At this point in the keynote, Microsoft began to give some specific examples of the performance benefits that were possible with the shift to 64-bits. We remain somewhat skeptical in some instances, but there are definitely areas that stand to benefit. The first demonstration was of NewTek's LightWave 3D rendering software, comparing the 32-bit and 64-bit versions. We tried to take some shots, although they are blurry and dark at best.
The presenter, Jay Kenny, gave some example interactions and suggested that model complexities could be increased substantially with the 64-bit version. He stated that certain complex scenes that would have required 100 passes in the 32-bit version could be done in as few as 7 passes in the 64-bit version, although it appears that each pass took longer in the 64-bit version, as the overall speedup was claimed to be around 3X. Another example given was the ability to work in more completely rendered environments rather than flat-shaded models, although we're not entirely sure what constraints prohibited the 32-bit version from using textures.
The next example given was of SQL Server 2005, once again running 32-bit and 64-bit versions on the respective OSes. The hardware was supposedly identical, but as you can see in the slides, the 32-bit system reports 1 GB RAM and Page File sizes while the 64-bit system reports 2 GB RAM and Page File sizes. Our impression was that there were some smoke and mirrors present, but we would still expect a large database to have substantial performance benefits by running in a 64-bit world. The demonstration showed CPU usage spiking to 100% rather rapidly in the 32-bit world, while the 64-bit world was able to handle 5X as many clients at a lower total CPU usage.
After the demonstrations, we were shown the above list of current 64-bit solutions being offered by Microsoft, as well as their plans moving forward. Not surprisingly, all of the solutions MS is focusing on are related to the server environment. MS is more dependent on 3rd party support for the desktop applications that can benefit from 64-bits, and now that XP-64 is officially released, we should see these products begin to show up at retail.
Beyond XP-64
XP-64 wasn't the only major topic covered in the keynote. The other major topic was the next generation of Windows, codenamed Longhorn. There were several demonstrations of Longhorn features and capabilities, although there really wasn't a whole lot of new content shown. Gates also showed off several of the new tablet PC devices, but again, there wasn't anything really amazing to report. Here are a couple of gratuitous shots of these devices before we move on to Longhorn.

For those of you who know nothing about Longhorn, here are a few of the interesting tidbits. For starters, Longhorn will release simultaneously in 32-bit and 64-bit versions. In other words, 32-bit OS and system support will continue for quite a few more years, as Longhorn isn't scheduled to ship before the holiday 2006 time frame. Longhorn will also represent the most significant redesign of the Windows UI since the upgrade from Windows 3.1 to Windows 95 - or at least, that's what Microsoft is claiming.
The above represents the broad plans for Longhorn in terms of features and hardware. WinFX is one of the core changes, and it relates to the way all of the GUI functions will be handled. 3D hardware acceleration with pixel shaders, alpha blending (transparency) and other graphical effects - a cynic might suggest that many look like stuff we've seen in OS X - are going to be a major part of the new OS.
One of the interesting points is that the graphics in Longhorn will be vector based, allowing for zooming effects without the pixelization that we get with the current model. This extends even further to areas like the new Metro document format. Metro is a royalty-free, XML-based standard that Microsoft has created for Longhorn. The idea is that it will allow better translation of documents between various formats.
To illustrate this, MS gave a brief demonstration of a prototype printer with hardware support for the Metro format, with an example of a PowerPoint slide printed using the current driver model as well as the new Metro model. Frankly, we're not sure what the point was with the demonstration, other than to show that some programs don't print properly - PowerPoint apparently being a culprit. Anyway, Metro sounds like a new take on the PostScript concept, only without royalties. Royalty-free isn't a bad idea, certainly, but this feels quite a bit like the old browser wars: take something another company sells, clone it, integrate it into the Windows OS, and give it away for free. Consumers might like it, but competitors don't all benefit from such decisions.
The purpose of Longhorn as a whole continues to be improving the computing experience, with security, performance, reliability, and service all playing a role. In order to realize this goal, changes are coming with Longhorn. One of the major changes will be the new driver model and certification program. We'll have more to say on that later.
Windows Update will also be getting a renovation, with improved driver support. Gates suggested that in the future, users will all run WHQL certified drivers because there will be no need to use unsigned drivers. Hopefully, that will come about by improving driver quality rather than by relaxing the requirements to gain certification.
More on Longhorn
With Longhorn on the horizon, one of the concerns invariably becomes preparing for the upcoming launch. No one wants to spend a lot of money on a new PC today that will become obsolete with the launch of a new OS.

Thankfully, the requirements for Longhorn drivers are now finalized; one of the first day's announcements was that the Longhorn Driver Model is now complete. Microsoft provided attendees with the appropriate information on CD, as well as the basic hardware requirements. Partners can already begin preparing for the launch with the "Longhorn Ready PC" program.
The roadmap for Longhorn is already well underway. Further information and details will be given to partners over the coming months, with the Beta 1 stage coming this summer followed later in the year by the Beta 2 stage. As we talked about with XP-64, MS is planning to take some time finalizing the code for Longhorn, and the tentative release date is the end of 2006 for the client platform. That leaves them nearly a year to work from the Beta 2 stage up through the RTM (Release To Manufacturing) stage. It's tough to come up with a conspiracy for why MS would intentionally delay Longhorn, and as with XP-64 we feel that the timeline is simply meant to give them the best chance of a smooth launch.
The above timeline for Longhorn is for the client version. The server version will follow, although it could trail by as much as a year. Between now and the launch of Longhorn Server, we will see several updates to the Windows 2003 Server OS. We'll see how Microsoft does in terms of executing these plans, although "Holiday 2006" is still a rather vague release date. Don't hold your breath....
Thoughts on the Longhorn Driver Model
The x64 launch aside, we have heard about some very interesting hardware refinements that Microsoft wants to see happen by the time we reach Longhorn availability. As Gates mentioned in his keynote, the Longhorn driver model is finished. This means that hardware vendors can begin making sure that their hardware will run smoothly under Longhorn today. It also means that we are able to catch a glimpse of what things will be like when Longhorn finally comes along.

One of the most interesting things to us is the Longhorn Display Driver Model (LDDM). Under the new display driver model, Microsoft wants to more closely integrate the graphics hardware with the operating system. In order to do this, a couple of things are going to happen. First, graphics drivers will give up management of graphics memory to Windows. Windows will then handle the complete virtualization and management of graphics memory. This will have a large impact on the way graphics hardware vendors approach driver writing. In spite of the simplification that Windows memory management will bring to the graphics subsystem, different management techniques may lend themselves more readily to one hardware architecture or another. Right now, we are hearing that ATI and NVIDIA are both playing nice with Microsoft over what will happen when they lose full control of their own RAM, but we will be sure to keep abreast of the situation.
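To make the memory-management shift concrete, here is a conceptual sketch of our own (the actual LDDM interfaces weren't detailed at the keynote): the OS tracks every surface allocation itself and, when VRAM fills up, evicts the least-recently-used surface to system memory instead of letting the driver carve up its own RAM:

```c
#include <stdio.h>
#include <stdlib.h>

#define VRAM_BYTES (256u * 1024 * 1024)   /* assume a 256 MB card */

typedef struct Surface {
    size_t size;
    int in_vram;            /* 1 = resident in VRAM, 0 = evicted to RAM */
    unsigned last_use;      /* LRU timestamp maintained by the OS */
    struct Surface *next;
} Surface;

static Surface *surfaces;
static size_t vram_used;
static unsigned tick;

/* Push the least-recently-used resident surface out to system memory. */
static int evict_one(void)
{
    Surface *victim = NULL;
    for (Surface *s = surfaces; s; s = s->next)
        if (s->in_vram && (!victim || s->last_use < victim->last_use))
            victim = s;
    if (!victim)
        return 0;
    victim->in_vram = 0;      /* real code would copy the bits out here */
    vram_used -= victim->size;
    return 1;
}

/* OS-side allocator: the driver no longer manages VRAM on its own. */
static Surface *os_alloc_surface(size_t size)
{
    while (vram_used + size > VRAM_BYTES)
        if (!evict_one())
            return NULL;      /* request larger than all of VRAM */
    Surface *s = calloc(1, sizeof *s);
    if (!s)
        return NULL;
    s->size = size;
    s->in_vram = 1;
    s->last_use = ++tick;
    s->next = surfaces;
    surfaces = s;
    vram_used += size;
    return s;
}

int main(void)
{
    os_alloc_surface(200u * 1024 * 1024);  /* fits in VRAM */
    os_alloc_surface(100u * 1024 * 1024);  /* forces an eviction */
    printf("VRAM in use: %zu MB\n", vram_used >> 20);
    return 0;
}
```

The interesting consequence is exactly the one noted above: the eviction policy now belongs to Windows, and a policy that suits one GPU's memory architecture may not suit another's.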
In addition to the above issues with the LDDM (i.e. Windows sometimes has problems managing system RAM, and now they're going to manage VRAM?), Windows can't manage memory that is taken up by the video BIOS. There has been a push towards UEFI (Unified Extensible Firmware Interface) as a replacement for the archaic BIOS, and Microsoft would like to see the video BIOS become more entwined with the operating system. The plan right now is to make use of ACPI in order to facilitate this, but we may see even more advancements once we have UEFI hardware. Maintaining a higher level of integration on all fronts with the OS should help make more display options automatic and highly accessible.
Even if UEFI makes it into all hardware platforms by the end of 2006 and graphics hardware vendors take up the cause, we will still need to have legacy BIOS and VGA firmware support in all computers in order to run older (i.e. non-Longhorn) Operating Systems. AMD likens this transition to the current x86-64 transition. Hardware will support legacy and advanced functionality for some time until the user install base is such that legacy support can be dropped.
Hybrid Hard Drives
If new and improved firmware management doesn't strike your fancy, Microsoft is working together with Samsung and others to create a new breed of hard disk drive. You may have noticed a slide earlier mentioning the potential for "hybrid hard drives". The basic premise of the hybrid hard drive is the inclusion of non-volatile flash RAM inside the drive itself.

Though it hasn't made a great deal of headway yet, Microsoft is really hoping this idea will take off. Gaining support for this technology will round out Longhorn's disk read caching scheme. The aggressive read caching that Longhorn does is able to minimize disk accesses for reads by moving large amounts of data into RAM. Unfortunately, using system memory as a writeback cache is not a very feasible (or safe) option. In order to cache disk writes, fast, solid state, non-volatile RAM can be used instead.
Rather than shove this NVRAM on the motherboard and add a level of complexity to the rest of the system, Microsoft and others have determined that giving hard drive manufacturers full control over the use of NVRAM and caching will allow the rest of the system to operate as if nothing has changed. With the hybrid hard drive managing its own 64MB to 128MB writeback cache, the OS need not worry about what is going on internally with the drive.
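A minimal sketch of the drive-side idea (the flush policy and sizes here are our assumptions, based on the 64MB to 128MB figures being floated): writes accumulate in the drive's NVRAM, and the platters only spin up when the buffer fills or a flush is requested:

```c
#include <stdio.h>
#include <string.h>

#define NVRAM_BYTES (64u * 1024 * 1024)   /* assume the 64MB variant */

static unsigned char nvram[NVRAM_BYTES];
static size_t nvram_used;

/* The only code path that touches the magnetic media. */
static void flush_to_platters(void)
{
    if (nvram_used == 0)
        return;
    /* spin up, write nvram[0..nvram_used) to disk, spin back down */
    printf("spun up, flushed %zu KB, spun down\n", nvram_used >> 10);
    nvram_used = 0;
}

/* Host writes land in NVRAM; the OS never sees the difference. */
static void drive_write(const void *buf, size_t len)
{
    if (nvram_used + len > NVRAM_BYTES)
        flush_to_platters();   /* cache full: one burst of disk activity */
    memcpy(nvram + nvram_used, buf, len);
    nvram_used += len;
}

int main(void)
{
    unsigned char block[4096] = {0};
    /* A long burst of small writes is absorbed silently... */
    for (int i = 0; i < 1000; ++i)
        drive_write(block, sizeof(block));
    /* ...and the disk itself only spins up for the eventual flush. */
    flush_to_platters();
    return 0;
}
```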
The upside to all this is that normal usage models show that average users don't usually write more than 64MB every 10 minutes. This means that the average case may see zero writes hitting the hard drive for 10 minutes, and possibly more with the 128MB solution. Combined with the read cache built into Longhorn, we could see zero disk accesses for 10 to 15 minutes on a heavily utilized system once the OS and applications have initialized. This could mean very large thermal and power savings on notebook drives. Microsoft talked about dropping average drive power consumption by over 50%.
In addition to keeping disks powered down in notebook applications, having no moving parts in use means less chance of failure. This could help avoid issues that even today's accelerometers can't avoid. Added benefits include faster resume from hibernation and quicker boot times: the system is able to store boot data in the NVRAM, and upon startup, the BIOS is able to access this data without waiting for the hard drive to spin up. By the time the system is finished with the NVRAM, the drive will be at full speed and ready to continue loading the OS.
There was some talk about hybrid HDDs improving MTBF (mean time between failures), but we will have to wait and talk to the disk manufacturers about this one. It seems counterintuitive that spinning the disk up and down more often would decrease wear on the drive as a whole. The heads will benefit, but how will the added wear on the spindle affect failure rates?
Other questions include the speed and cost of the flash RAM proposed for inclusion in hybrid disks. Today's flash RAM is at least an order of magnitude slower than hard drive speeds. Microsoft says it expects NAND flash to reach speeds nearing 100MB/s by the time hybrid disks see the market. It also expects this flash RAM to be relatively cheap. We aren't so optimistic at this point, but you never know. It may be enough of an advantage to a company like Samsung (which makes both disks and flash RAM) to really push costs down to a point where hybrid hard drives are feasible.
We aren't quite sure we like the idea of Windows' aggressive approach to caching yet, but it seems to have worked well for OS X thus far. Only time will tell if Microsoft's approach is as good as Apple's.
Day One Conclusion
That takes care of the major talks of day one. There were other interesting things that we saw on the show floor, including working servers running the 1.72 billion transistor Montecito processor (Itanium 2). We'll try to get some better shots of the setup tomorrow, but suffice it to say that the die size is HUGE - even in comparison to the already rather large Smithfield die.

So far, everyone is putting on a good front for the XP-64 launch. We'll try to see how it really stacks up in our own analysis soon. Stay tuned.