Introduction

Historically, mobile CPUs were designed as derivatives of their desktop counterparts. You'd cut down on the cache, lower the clock speed and voltage, maybe tweak the package a bit, and you'd have your mobile CPU. For years, this process of trimming the fat off desktop (and sometimes server) CPUs to make mobile versions was the industry norm - but then Timna came along.

Timna was supposed to be Intel's highly integrated CPU for use in sub-$600 PCs, which were unheard of at the time. Timna featured an on-die memory controller (albeit for RDRAM), an integrated North Bridge, and an integrated graphics core. The Timna design was very power-optimized and very cost-optimized. In fact, a lot of the advancements developed by the Timna team were later put into use in other Intel CPUs simply because they were better and cheaper ways of doing things (e.g. some CPU packaging enhancements used in the Pentium 4 were originally developed for Timna). What set Timna apart from Intel's other processors was that it was designed in Israel by a team completely separate from those who handled the desktop Pentium 4 designs. Intel wanted a fresh approach for Timna, and that's exactly what they got. Unfortunately, after the chip was completed, the market looked bleak for a sub-$600 computer; the chip was scrapped, and the team was reassigned to a new project a month later.

The new project was yet another "out-of-the-box" project called Banias. The idea behind Banias was to design a mobile processor from the ground up; instead of taking a higher end CPU and doing your best to cut down its power usage, you started with a low power consumption target and then built the best CPU that you could from there. With a chip on their shoulder (no pun intended) and a bone to pick with Intel management, the former Timna team did the best that they could on this new chip - and the results were impressive.

Banias, later called the Pentium M, proved not only to be an extremely powerful mobile CPU, but also one of Intel's most on-time projects, missing the team's target deadline by less than 5 days. For a multi-year project, being off by 5 days is nothing short of impressive - and so was the CPU's architecture. While many will call the Pentium M a Pentium III/4 hybrid, it is far from it. Intel knew that the Pentium 4 wasn't a low-power architecture. The Pentium 4's trace cache, double-pumped ALUs, extremely long pipeline and resulting high frequency operation were horrendous for low power mobile systems. So, as a basis for a mobile chip, the Pentium 4 was out of the question. Instead, Intel borrowed the execution core of the Pentium III; far from the most powerful execution core, but a good starting point for the Pentium M. Remember that the Pentium III's execution core was partly at fault for AMD's early successes with the Athlon, so performance-wise, Intel would have their work cut out for them.

Taking the Pentium III's execution units, Intel went to town on the Pentium M architecture. They implemented an extremely low power, but very large L2 cache - initially at 1MB and later growing to 2MB in the 90nm Pentium M. The large L2 cache plays a very important role in the Pentium M architecture, as it highlights a very bold design decision - to keep the Pentium M pipeline filled at all costs. In order to reach higher frequencies, Intel had to lengthen the pipeline of the Pentium M from that of the Pentium III. The problem with a lengthened pipeline is that any bubbles in the pipe (wasted cycles) are wasted power, and the more of them you have, the more power you're wasting. So Intel outfitted the Pentium M with a very large, very low latency L2 cache to keep that pipeline full. Think of it like placing a really big supermarket right next to your home instead of having a smaller one next to your home or a large one 10 miles away - there are obvious tradeoffs, but if your goal is to remain efficient, the choice is clear.

A large and low latency L2 cache isn't enough, however. Intel also equipped the Pentium M with a fairly sophisticated (at the time) branch prediction unit. With each mispredicted branch, you end up with a large number of wasted clock cycles and that translates into wasted power - so beef up the branch predictor and make sure that you hardly ever mispredict anything in the name of power.

The next thing to tackle was chip layout. Normally, CPUs are designed to exploit the fastest possible circuits within the microprocessor, but in the eyes of the power conscious, any circuit that runs faster than it needs to is wasting power. So, the Pentium M became the first Intel CPU designed with a clock speed wall in mind. Intel would have to rely on their manufacturing to ramp up clock speed from one generation to the next. This is why it took the move from 130nm down to 90nm for the Pentium M to hit 2.0GHz, even though it launched at 1.6GHz.

Other advancements were made to the core to improve performance as well; things like micro-op fusion and a dedicated stack manager are also at play. We've talked in detail about all of the features that went into the first Pentium M and its later 90nm revision (Dothan), but the end result is a CPU that is highly competitive with the Athlon 64 and the Pentium 4 in notebooks.

Take the first Pentium Ms, for example: at 1.6GHz, the first Pentium Ms were faster than 2.66GHz Pentium 4s in notebooks in business and content creation applications. More recently, the first 2.0GHz Pentium Ms based on the Dothan core managed to outperform the Pentium 4 3.2GHz and the Athlon 64 3000+. Pretty impressive for a notebook platform, but what happens when you make the move to the desktop world?

On the desktop, the Pentium 4 runs at higher clock speeds, as does the Athlon 64. Both the Pentium 4 and Athlon 64 have dual channel DDR platforms on the desktop, unlike the majority of notebooks out there. Does the Pentium M have what it takes to be as competitive on the desktop as it is in the mobile sector? Now that the first desktop Pentium M motherboards are shipping, this review is here to find out.

Problem #1: Can't Use Desktop Chipsets

  • saratoga - Tuesday, February 08, 2005 - link

    Overall this article brings up a lot of the points missing in other Dothan reviews. Very nice work. Too many people have looked at a few benchmarks, bashed Intel for the P4, and missed the whole issue here.

    Intel isn't stupid. It's obvious they don't think Dothan will work in its current form as a desktop chip, and that's why they're still sticking with Prescott at the moment, only bringing the P-M over much later in a reworked form with Yonah. Assuming they ever do introduce a desktop chip based on the P-M.

    Also, significant scaling out of Dothan seems unlikely. They'll probably get a few more speed grades out of it, but whoever was saying 3GHz was dreaming. Maybe at 65nm, but that sure as heck won't be Dothan, and it won't be for a while yet.
    Reply
  • PrinceGaz - Tuesday, February 08, 2005 - link

    Well put, classy; the P-M is a chip that, at least in its current form, can never be a desktop processor because of severe weaknesses in several areas.

    A faster dual-channel chipset will never make up for its poor FPU performance in heavy-duty applications, something I'd heard about many months ago but hadn't seen reliable benchmarks of until now.

    If you want to do word processing or browse the web, I'm sure the P-M will be very efficient. If you want to run the sort of applications that seriously test a processor and are the reason you'd buy it in the first place for a desktop PC, then the P-M falls far short of the mark; in fact, it is so far behind at times that it is embarrassing.

    But you don't get anything for nothing; the P-M is great at doing easy stuff very quickly, which is what laptops are used for mainly, but when the going gets tough, you want a real desktop processor like the A64 to keep things moving.
    Reply
  • classy - Tuesday, February 08, 2005 - link

    T8000

    What part don't you understand? The Pentium M has been reviewed all over the net. Out of all the reviews, only one reviewer hit 2.8. Everyone else was similar to Anandtech's results. 2nd, I don't know where you've been, but every review of an FX55 I have seen routinely hits 2.8 with no problem. And almost all the lower speeds hit the 2.6-2.7 ballpark. Not to mention that a small increase with the A64 is much more significant than even a modest Intel OC because of the architecture of the A64 CPUs. Hey, everyone has a favorite CPU, video card, or motherboard maker. But when something is better, it's just better. And for anyone to even remotely argue the Pentium M as a challenge to the A64 CPUs is a bit silly. This chip reminds me a lot of the old 366@550 celery chips. IF you got a 366 to do 550, it was a great chip because it gave you nice performance for the price. The Pentium M doesn't have a price advantage and is on a platform that is outdated. IF you can overclock it to decent levels, it performs pretty well in some aspects but still sucks in many others. The problem is IF. But as I stated earlier, IF is out for the evening with MAYBE.
    Reply
  • LackofVision - Tuesday, February 08, 2005 - link

    I couldn't disagree more with the conclusions in this article.

    Anyone who can't see the promise of a desktop processor design based on the Banias isn't going beyond just looking at the numbers. Especially when you start thinking down the road about dual cores and the heat and performance bottlenecks associated with them.

    So because the Banias can't outperform the P4 or Athlon 64 in every benchmark, when hamstrung by an outdated chipset, and designed primarily for low power usage, the processor won't be competitive when running on a modern subsystem with a retuning of the core design to make it more suited to the desktop?

    Nothing like comparing apples to oranges and then drawing a conclusion on what a pear tastes like.
    Reply
  • jamawass - Tuesday, February 08, 2005 - link

    I doubt it, Intel makes huge profits by putting a price premium on mobile processors. They won't jeopardize this for a few enthusiasts.
    Reply
  • KristopherKubicki - Tuesday, February 08, 2005 - link

    FrostAWOL, #51: What's your point? Those HP blade servers run Pentium Ms and there is no mention of Pentium 4 anywhere.

    Kristopher
    Reply
  • HardwareD00d - Tuesday, February 08, 2005 - link

    Pentium M = Yawn

    Reply
  • T8000 - Tuesday, February 08, 2005 - link

    #52
    Since it is very rare to see an A64 CPU overclock above the available speeds without subzero cooling, the comparison would likely be between a 2.4 or maybe 2.6GHz A64 and a 2.8GHz P-M.

    Also, P-M CPUs with higher multipliers usually overclock better due to the limited FSB possibilities of the i855 chipset. This could explain why Anand did not reach 2.8GHz in this review.
    Reply
  • dobwal - Tuesday, February 08, 2005 - link

    While I think that this is a good article, allowing us to see the performance of the Dothan in its current state against desktop CPUs, some of the conclusions made by the author don't take into account a lot of factors.

    1. "The problem is that in the transition to the desktop world, its competitors get much more powerful, while the Pentium M is forced to live within its mobile constraints."

    How can this statement be valid? The mobile constraints on the Dothan are never really removed. Nothing is really done to try to make the mobile Dothan mimic a (possible) desktop variation of itself. Do you really think there is a chance of an official desktop Dothan running at 2.4 with DDR333 in single channel and a 533 FSB? How about re-running these benchmarks along with a 3.2GHz P4 with DDR333 (single channel) and an FSB speed of 533?

    2. "The fundamental issue is that although the Pentium M is surprisingly competitive with the Athlon 64 on a clock for clock basis, the Pentium M's architecture can't scale to the same clock speeds that the Athlon 64 can. The fact of the matter is that while the Pentium M will hit 2.26GHz by the end of 2005, the Athlon 64 will be on its way to 3.0GHz and beyond."

    The fact of the matter is you are comparing the scalability of the king of mobile chips vs. the scalability of the king of desktop chips and making an assumption without taking into account all the factors involved. The fact is we do not know the scalability of the Dothan without its mobile constraints. Even more so, we don't know the true scalability of the mobile Dothan. What other mobile CPU offers the same level of performance vs. battery life?

    It's more profitable for a company to retard performance increases of its CPU if there is no other CPU that can offer the same level of performance currently or in the near future. Revisions or new steppings increase cost.

    AMD is in the same boat with the A64.
    How long has the A64 been stuck on 2.4GHz? Most of the latest PR number increases with relation to the A64 have come from HT increases, dual channel, and moving from 754 to 939. Imagine the scenario where the Prescott worked as intended and the Tejas was around the corner. Do you think that the A64 would still be at 2.4GHz, or more like 3.0 or 3.2GHz?

    Some of the conclusions could be seen as true under the circumstances of Intel never officially introducing the Dothan to the desktop world, where all we get are mobile Dothans on chipsets with desktop features.

    However, these benchmarks can't prove or disprove the viability of a dothan that was devised by Intel to be a desktop competitor.
    Reply
  • classy - Tuesday, February 08, 2005 - link

    #55

    IF Intel does this. IF Intel does that. Unfortunately IF left with MAYBE and they went to the movies to see the new #1 movie from Intel, Could Have, But Didn't, starring Mr Dothan CPU. :)
    Reply
