35 Comments
rflcptr - Wednesday, September 24, 2008 - link
The origin of Turbo Mode isn't Penryn, but rather Intel's Dynamic Acceleration Technology (DAT), first found in the Santa Rosa mobile platform, with which both Merom and Penryn were compatible (and functioned with the tech activated).
hoohoo - Wednesday, September 10, 2008 - link
Outside of games, the only area where raw performance matters to me is running high performance code - for me this is graphics processing and 3D rendering code, and it's just a hobby. I have some dealings with the HPC crowd though.
In the HPC biz, memory bandwidth is a big issue. AMD has won hands down on that metric against Intel until perhaps the past six months. The Nehalem server chip looks like it will beat Opteron there.
Another important metric for HPC is linear algebra performance. GPUs are very good at linear algebra, but they impose strange programming requirements on the people who understand scientific programming - these people want to worry about the science or engineering, not about the specific cache architecture of an Nvidia or ATI GPU.
Just because I could, last winter I wrote some 2D graphics processing routines for an 8800GT + CUDA + Athlon X2 5200: Gaussian blur, sharpen filter, the like. I achieved on the order of a 20x speed improvement on the 8800GT vs the Athlon X2, all on Linux - but it was a moderately brutal programming experience and I doubt your average researcher will do it. And the PCIe bandwidth bottleneck would be a problem for large scale batch processing of such a simple calculation.
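To give a flavor of it, the guts of one pass looked roughly like the sketch below (reconstructed from memory and heavily simplified - the names, coefficient handling, and launch configuration are illustrative, not my actual code):

__global__ void blur_h(const float *in, float *out, int w, int h,
                       const float *coef, int radius)
{
    // one thread per output pixel
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    float acc = 0.0f;
    for (int i = -radius; i <= radius; ++i) {
        int xx = min(max(x + i, 0), w - 1);   // clamp at the image border
        acc += in[y * w + xx] * coef[i + radius];
    }
    out[y * w + x] = acc;
}

// Host side, roughly: copy the image over PCIe, run a horizontal then a
// vertical pass, copy back. For a filter this cheap the two copies dominate:
//   cudaMemcpy(d_in, img, bytes, cudaMemcpyHostToDevice);
//   dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
//   blur_h<<<grid, block>>>(d_in, d_out, w, h, d_coef, radius);
//   cudaMemcpy(img, d_out, bytes, cudaMemcpyDeviceToHost);

The kernel itself is easy enough; the pain is everything around it, and as I said, the PCIe copies eat you alive on cheap per-pixel math.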
I don't know about ATI GPUs yet. I got a 3870 eight months ago and installed the AMD HPC GPU SDK ('nuff acronyms for ya?), but I can't face the pain of using it when, after booting all the way into XP, I could be fragging away in HL2 or Q4, or conquering the world in Civ2 Gold instead - and nobody really uses Windows servers for HPC clusters anyway. I think about writing Brook+ code on my 3870 sometimes, but honestly I don't care that much. It'll be similar performance to the 8800, and it'll be *Windows* code.
If Intel can produce a chip that sits somewhere between Larrabee and Nehalem, matching that memory bandwidth with an easily programmed but highly parallel design, then Intel will have an opportunity to define a new sub-market: HPC processors.
It is indicative of the deficiencies of AMD's marketing that it has a good GPGPU part, yet the only way to program it is on the one OS that HPC shies away from: Windows. Clusters mostly run Linux or UNIX.
But AMD is working on a CPU+GPU product that could compete in that market.
Which of AMD or Intel will realize that there is money to be made with a chip that combines 2 or 4 CPU cores plus 4 or 8 GPU-style linear algebra cores, all with IEEE double precision ability?
Whither the Cell?
:-)
hooflung - Monday, November 3, 2008 - link
I think you are minimizing what an operating system is. While it is true that Linux, AIX and Solaris account for a large number of HPC and cluster environments, that doesn't mean Windows is poor in this regard.
There are solid options for Windows HPC, where InfiniBand support is very, very solid - Microsoft helped define the spec. It just isn't done in the public eye as often as it is with Linux. Remember, AIX and Windows were the most solid platforms for J2EE for a long, long time.
Also, Windows clusters can be the better TCO solution for some people. EVE Online used Windows 2000 (now 2003, x86 and x64) and wrote their own load balancing software in Stackless Python (and they now have their own async IO stackless library), which holds around 33k concurrent users at any given time of day.
You really just have to decide what market you are trying to reach when considering OS choices. They all can provide similar performance.
Pixy - Friday, September 5, 2008 - link
All this sounds nice... but I have a question: when will laptops become fanless? The CPU is fast enough, work on turning down the heat!
Davinchy - Tuesday, August 26, 2008 - link
I thought I read somewhere that if the other processor cores were not working, they shut down, and the one that was working got more juice and overclocked itself. So wouldn't that suggest that for the average consumer this chip will game much faster than a Penryn?
Dunno, maybe I read it wrong.
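Roughly what I'm picturing, with made-up-but-plausible numbers (not from the article, so correct me): a 133MHz base clock at a 20x multiplier gives 2.66GHz with all four cores busy; power-gate three idle cores and let the last one step up two 133MHz bins, and you get 133MHz x 22 = ~2.93GHz. If that's how it works, single-threaded games see a free ~10% clock bump.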
jediknight - Sunday, August 24, 2008 - link
For desktop builders.
My aging S754 Athlon64 is dying, so it's time to start thinking about building a new one. My laptop can only hold me out for so long, though.
Will I be able to buy a quad-core Nehalem processor in the $250-300 range by the end of the year?
UnlimitedInternets36 - Saturday, August 23, 2008 - link
Core i7 wins big time for 3D rendering, modeling, and CAD programs.
Turbo is the best feature. I hope that, at least on the Extreme Edition, we can set the Turbo headroom to something like 5GHz and have totally dynamic overclock scaling, FTW!
ZBrush can utilize 256 processors, so I think a 2-socket Core i7 setup will help me out just fine. Sure, it doesn't automatically boost FPS in games, but that's partially a programming issue as well. Sooner or later the coding will catch up.
munyaka - Friday, August 22, 2008 - link
I have always stuck with AMD, but this is the final nail in the coffin.
X1REME - Friday, August 22, 2008 - link
Is there anything you see that we don't? Please explain why.
niva - Friday, August 22, 2008 - link
Well, it is another step forward for Intel while AMD falls farther and farther behind the times. I do want to caution that at this point there is no software actually optimized to run on i7 and whatever new instructions the chips may have. Once that happens, and games are patched/recompiled or new games come out to take advantage of the massive CPU/memory bandwidth i7 offers, it will be lights out.
Waiting on AMD to come out with the next best thing is getting really old. I have a Phenom system and won't need a new one for at least another year or two, but even though I wish AMD would do better, they're just being dominated by Intel right now.
qurious69ss - Friday, August 22, 2008 - link
You sound like one of those sad fanboys from amdzone. Tell dimentia to get a life.
X1REME - Friday, August 22, 2008 - link
Wow, this whole CPU is a copy of an AMD CPU and you expect AMD fanboys not to get mad at you? Secondly, this fantasy is baseless until you can compare it to an offering from the AMD team (Shanghai and Deneb). AMD is still KING with their Opteron, and most likely will be in the future with their new CPUs coming soon for the server and also the desktop.
DigitalFreak - Friday, August 22, 2008 - link
Learn to spell, you goober.
X1REME - Friday, August 22, 2008 - link
OK DORK, I am sure you have never made a mistake (there=their), duh. I bet you're some kid all hyped up for the i7 who wishes Xmas would come early, lol. Anyway, it's not a desktop chip, it's a server chip, DUH. It's meant to compete with the AMD Opteron chip (the best). Although Opteron will lose its crown, it won't be to i7 but to Shanghai (AMD's latest and greatest). And like I said before, Deneb will clear up anything out of place.
The reason AMD does not grab a microphone and start shouting at the top of their voice is that AMD doesn't have the resources and money Intel has. If it reveals too much about its future strategy and Intel likes that strategy (like the Opteron, HT, on-board memory controller, etc.), there is a big theoretical chance that Intel could take the idea and deliver a product well before AMD. So it's not over until AMD says it's over.
snakeoil - Thursday, August 21, 2008 - link
Nehalem fails. It was supposed to be superior to Core 2. Intel was against the wall this time. Why? Because the old front side bus architecture was lagging more and more in the server arena and becoming a bottleneck compared to HyperTransport, so Intel was forced to abandon the front side bus. But the strong point of Core 2 is that, because you don't have an integrated memory controller, you can stuff the processor with a huge L2 cache.
So Nehalem sucks in gaming; there is no way the enthusiast is going to pay more for a processor that produces fewer fps than what they already have.
And hyperthreading is a risky move. Hyperthreading is known for being power hungry, and although it produces gains in some applications, some server applications actually run slower, so in many cases the old hyperthreading had to be disabled.
Nehalem is crippled for the enthusiast and the regular user.
nuff said.
AssBall - Saturday, August 23, 2008 - link
You musta missed where Anand says several times it's not intended for better gaming? It will be significantly faster than Penryn for multithreaded applications. I guess I don't see how this makes it "fail". Maybe in your fantasy world, where 90% of the CPU market is "enthusiasts".
snakeoil - Saturday, August 23, 2008 - link
Enthusiasts drive the market, you fruityass.
UnlimitedInternets36 - Saturday, August 23, 2008 - link
LOL, this year Satan, err, Santa is going to take away your PC because you don't deserve to have one anymore, you jaded nerd.
Gasaraki88 - Friday, August 22, 2008 - link
Thanks! I never knew there was an expert on CPU design in the house. I've learned so much from your well researched, tested, and thought out comment...
pool1892 - Friday, August 22, 2008 - link
First of all, the enthusiast market is a very tiny niche; it would not kill Intel even if you were right.
But you are not. The L2 of Penryn (and Banias) is much more like Nehalem's L3 than Nehalem's L2. And if you have a single-threaded game, it now has 8MB of L3 at similar latencies, but with a second buffer, the 256KB L2, and a MUCH smaller cache miss penalty.
Concerning hyperthreading: please read the article first. Nehalem switches off what it does not need, power-wise. And there are about fifty-two other very valid arguments.
smilingcrow - Thursday, August 21, 2008 - link
You do realise that you aren't meant to drink the snakeoil as it can rot your brain….
JarredWalton - Friday, August 22, 2008 - link
I don't think Nehalem will "fail" at running games... in fact I expect it to be faster than Penryn at equivalent clocks. I just don't expect it to be significantly faster. We'll know soon enough, though.
chizow - Friday, August 22, 2008 - link
I'd agree with the OP, maybe with a different choice of words. Also, Johan seems to disagree with you, Jarred, citing the decreased L2 and slower L3, along with similar maximum clock speeds, as reasons why Nehalem may perform worse than Penryn at the same clocks in games. We've already seen a very preliminary report substantiating this from Hexus:
http://www.hexus.net/content/item.php?item=15015&a...
There are still a lot of questions about overclockability as well, without a FSB. It appears Nehalem will run slightly hotter and draw a bit more power under load than Penryn, with similar maximum clock speeds when overclocked.
What's most disappointing is that all of these new, hyped features seem to amount to squat with Nehalem. Where's the benefit of the IMC? Where's the benefit of the triple channel memory bus? Where's the benefit of HT? I know... I know... it's for the server market/HPC crowd, but does no one find it odd that something like an IMC results in negligible gaming gains when it was such a boon for AMD's Athlon 64?
All in all, very disappointing for the gamer/enthusiast, as we'll most likely have to wait for Westmere for a significant improvement over Penryn (or even Kentsfield) in games.
cornelius785 - Sunday, August 24, 2008 - link
From looking at that article, I have doubts that it could be worse than Penryn in games. I also doubt that the performance increase is going to be the same as it was for Core 2. People are expecting that, and since they're figuring out that their unrealistic expectations can't be realized with Nehalem, they call Nehalem trash and say they'll go back to AMD, ignoring the fact that Intel is still king in gaming with EXISTING processors. It's also not like 50 fps at extremely high settings and resolutions isn't playable.
JarredWalton - Friday, August 22, 2008 - link
Given that this is early hardware, most likely not fully optimized, and performance is still substantially better than the QX6800 in several gaming tests (remember, I said clock-for-clock I expected it to be faster, not necessarily faster than a 3.2GHz part when it's running at 2.93GHz)... well, it's going to be close, but overall I think the final platform will be faster. Johan hasn't tested any games on the platform, so that's just speculation. Anyway, it's probably going to end up being a minor speed bump for gaming and a big boost everywhere else.
Felofasofanz - Thursday, August 21, 2008 - link
It was my understanding that tick was the new architecture and tock was the shrink. I thought Conroe was tick, Penryn tock, then Nehalem tick, and the 32nm shrink tock?
npp - Thursday, August 21, 2008 - link
I think we're facing a strange issue right now, and it's about the degree of usefulness of Core i7 for the average user. Enterprise clients will surely benefit a lot from all the improvements introduced, but I would have a hard time convincing my friends to upgrade from their E8400...
And it's not about badly-threaded applications; the ones an average user has installed simply don't require such immense computing power. Look at your browser or e-mail client: the tasks they execute are pretty well split over different threads, but they simply don't need 8 logical CPUs to run fast, so most of them simply sit idle. It's a completely different matter that many applications which happen to be resource-hungry can't be parallelized to a reasonable degree - think about encryption/decryption, for example.
It seems to me that Gustafson's law begins to speak here: if you can't escape your sequential code, make your task bigger. So expect something like UltraMegaHD soon, then try to transcode it in real time... But then again... who needs that? It seems like a cowardly approach to me.
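For reference (my paraphrase of the formula, so check it): with a serial fraction s of the work and N processors, Gustafson's scaled speedup is S(N) = s + (1 - s) * N. Even with s = 0.3, eight cores give S = 0.3 + 0.7 * 8 = 5.9 - but only if you inflate the problem enough to keep all eight threads busy, which is exactly my complaint: who wants their desktop workload made 8x bigger just to justify the cores?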
Is it that software is lagging behind technology? We need new OSes, completely different ways of interacting with our computers, to justify so many GFLOPS on the desktop, in my opinion. Or at least that's what I'm dreaming of. The level of man-machine interaction has barely changed since the mouse was invented, and modern OSes are very similar to the very first OSes with a GUI, from somewhere in the 80s or even earlier. But over that time span CPUs have increased their capabilities almost exponentially - can you spot the problem here?
The issue becomes clearer if we talk about 16-threaded octa-cores... God, I can't think of an application that would require that. (Let's be honest: most people don't transcode video or fold proteins all the time...) I think it would be great if a new Apple or Microsoft emerged from some small garage to come and change the old picture for good. The only way to justify the technology is to change the whole user-machine paradigm significantly; going the old way leads nowhere, I suspect.
gamerk2 - Tuesday, August 26, 2008 - link
Actually, Windows is the biggest bottleneck there is. Windows doesn't SCALE. It does limited tasks well, but breaks under heavy loads.
Even the M$ guys are realizing this. Midori will likely be the new post-Windows OS after Vista SP2 (Windows 7).
smilingcrow - Thursday, August 21, 2008 - link
So eventually we may end up with a separate class of systems that are akin to ‘proper’ sports cars, with prices to match. Intel seemingly already sees this as a possibility, which is why they are releasing a separate high end platform for Nehalem. Unless a new killer app or two is found that appeals to the mainstream, I can't see many people wanting to pay a large premium for these systems, since as entry level performance continues to rise, fewer and fewer people require anything more than what it offers.
One thing mitigating against this is the current relatively low cost of workstation/server dual processor (DP) components, which should continue to be affordable due to the strong demand for them within businesses. It's foreseeable that it might eventually be more cost effective to build a DP workstation system than a UP high end desktop. This matches Apple's current product range, where they completely miss out on high end UP desktops and jump straight from mid range UP desktop to DP workstation.
retardliquidator - Thursday, August 21, 2008 - link
... think again.
Better luck next time before starting a flamebait about "not two bytes wide but 20 bits".
The effective usable speed is exactly 2 bytes: with 10/8 coding you need 20 bits to encode your 16 relevant ones.
you fail at failing.
defter - Friday, August 22, 2008 - link
Links are 20 bits wide, regardless of encoding or whether 1, 2, 8, 16, or 20 bits are used to transmit the data.
I wonder who is flamebaiting here; the previous poster just mentioned the correct link width, he wasn't talking about "usable speed".
rbadger - Thursday, August 21, 2008 - link
"Each QPI link is bi-directional supporting 6.4 GT/s per link. Each link is 2-bytes wide..."This is actually incorrect. Each link is 20 bits wide, not 16 (2 bytes). This information is on the slide posted directly below the paragraph.
JarredWalton - Thursday, August 21, 2008 - link
It's 20 bits, but using a standard 8/10 encoding mechanism, so of the 20 bits only 16 are used to transmit data and the other four bits are (I believe) for clock signaling and/or error correction. It's the same thing we see with SATA and HyperTransport.
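If I have the math right, that gives: 6.4 GT/s x 20 bits = 16 GB/s raw per direction, of which the 16 data bits carry 6.4 x 16 / 8 = 12.8 GB/s per direction, or 25.6 GB/s for the bi-directional link - which is where the headline QPI bandwidth figure comes from.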
ltcommanderdata - Thursday, August 21, 2008 - link
Since the PCU has firmware, I wonder if it will be updatable? It would be useful if lessons learned in the power management logic of later steppings, and in Westmere, could be brought back to all Nehalems through a firmware update, for lower power consumption or even better performance via smarter Turbo mode application. Although a failed or corrupt firmware update on a CPU could be very problematic.
wingless - Thursday, August 21, 2008 - link
I thought about this when I read about it the first time too. Flashing your CPU could kill the power management or the whole CPU in one fell swoop!