104 Comments

  • Azethoth - Wednesday, May 01, 2013 - link

    Oh man, I cannot wait for my new Haswell desktop build. As such, the built-in GPU is probably only useful for QuickSync and whatever OpenCL software can squeeze out of it in addition to the discrete GPU. Do we know yet if the fastest GPU (with the packaged memory) will be available for the fastest desktop SKU, or will it be for low-power mobile/ultrabooks only?

    Also will it perhaps be possible to do 2D layers and compositing on the built in Iris Graphics and purely the 3D on the discrete GPU? Something like the LucidLogix stuff on my current P8Z77V Deluxe?
    Reply
  • skiboysteve - Thursday, May 02, 2013 - link

    Latest Intel drivers allow that without anything special like Lucid's stuff. On Windows 8 only. Reply
  • marc1000 - Thursday, May 02, 2013 - link

    From the info, I understand that there will be a desktop version of it with the eDRAM: the -R CPUs. It will only be available as BGA though, which means you will have to buy it soldered on the motherboard. But it will exist. I just hope it has an unlocked multiplier at least. Reply
  • just4U - Sunday, May 05, 2013 - link

    I'd like to know what your current CPU is if you can't wait for a Haswell. If you say you have an i5 (or better) Sandy/Ivy Bridge, then I'm going to smack you :P

    Kidding aside, if you're on an older comp, then maybe Haswell is a good buy. Personally I don't see it for anyone on an Intel i5/i7 made in the last couple of years.
    Reply
  • HodakaRacer - Wednesday, May 01, 2013 - link

    I don't see any mention of "Iris Pro" for a notebook/tablet/convertible. I was hoping for something akin to the Razer Edge with GT3e, or a lighter notebook (I know you can't have a 45W TDP ultrabook). The lack of discrete graphics should allow OEMs to make smaller and lighter form factors that can still play games at lower settings. Reply
  • HodakaRacer - Wednesday, May 01, 2013 - link

    or at least they don't show it on the notebook benchmark Reply
  • whatthehey - Thursday, May 02, 2013 - link

    I think the i7-4950HQ is a GT3e part, isn't it? The i7-4900MQ is the GT3 I'm pretty sure, so that would make sense. Reply
  • cmikeh2 - Thursday, May 02, 2013 - link

    That's what I thought as well. The performance gap definitely suggests it. And I must say that's a very high SKU number for a non Extreme line product. Reply
  • IntelUser2000 - Thursday, May 02, 2013 - link

    Only the "HQ" parts are GT3. The MQ parts are regular GT2 parts. Reply
  • silverblue - Friday, May 03, 2013 - link

    Always reassuring to know you're buying a medium quality product... they must've known that would be inferred from MQ/HQ. Reply
  • ssiu - Wednesday, May 01, 2013 - link

    Would Windows 8 tablets use the 15W Haswell processors, or would they all shift to the ~10W Y-series processors for better battery life but same-or-worse performance?

    And it seems many (most?) tablets use single-channel memory (non-upgradable to dual-channel), thus severely handicapping GPU performance. Will that reverse with Haswell tablets? All the hard work going into improving GPU performance is being negated by such an idiotic move...
    Reply
  • teiglin - Thursday, May 02, 2013 - link

    Do we actually know that Intel is going to be selling Y-series Haswell parts? I had assumed that the Y SKUs were a gimmick to hold people over until Haswell had legit 10W parts. Reply
  • HodakaRacer - Thursday, May 02, 2013 - link

    I believe they are due around October, although I don't remember where I heard that. I believe there will be parts ~7W for tablets and such. Perfect timing for the Surface Pro 2 :) Reply
  • jeffkibuule - Wednesday, May 01, 2013 - link

    Haswell tablets need to focus on getting decent battery life before they start pursuing better GPU performance. 4 hours of typical usage is not enough. Reply
  • HodakaRacer - Thursday, May 02, 2013 - link

    They have atom for that. I obviously have not used haswell yet but it sounds like a good balance of performance and power for me. Reply
  • axien86 - Thursday, May 02, 2013 - link

    Countdown to benchmarks between Intel and AMD Richland/Jaguar APUs comparing:

    Latest DX11.1 games at 1920x1080 and higher resolutions with 3-6 monitors

    GPU compute performance with OpenCL

    Crossfire capability with Intel and AMD

    Finally, compare Intel and AMD Richland/Jaguar APU prices...
    Reply
  • HodakaRacer - Thursday, May 02, 2013 - link

    "Latest DX11.1 games at 1920x1080 and higher resolutions with 3-6 monitors"

    haha, if only
    Reply
  • JarredWalton - Thursday, May 02, 2013 - link

    We'll run our standard laptop benchmarks, which means 1366x768 Medium, 1600x900 High, and 1920x1080 Ultra settings for games. I'll see about testing OpenCL as well, if it works properly on Iris/Haswell iGPUs. As for CrossFire capability, I'll test it when possible, but let me tell you: Dual Graphics is not all it's cracked up to be. I'll have an article on the current Trinity + HD 7670M performance in the not-too-distant future, but even when it works, scaling is much lower than desktop CF. Reply
  • mayankleoboy1 - Thursday, May 02, 2013 - link

    If anything, I want a detailed review and test of QuickSync 3.0. :) Reply
  • whyso - Thursday, May 02, 2013 - link

    "not all it's cracked up to be"

    Are you referring to the people who say it's good or the people who say it's a gimmick?
    Would really like to see that review; it's hard to find any hybrid CrossFire benchmarks.
    Reply
  • jeffkibuule - Wednesday, May 01, 2013 - link

    If performance really is as claimed, I wonder if Apple will drop dedicated graphics in the 15" Retina MacBook Pro and just stick with Intel... Or we could even see a hypothetical 15" MacBook Air. Reply
  • ISwearImCool - Thursday, May 02, 2013 - link

    They might, but then they have extra space in the chassis of the laptop. Apple gets a lot of mileage out of their designs, so I expect it to stay.

    This should be huge for the 15 inch retina pro though. Its scrolling is currently less smooth than the 13 because it uses onboard graphics unless doing something intense.
    Reply
  • Death666Angel - Thursday, May 02, 2013 - link

    Huh? The 13" one only has the integrated graphics to work with as well. And it has the slower CPUs as well, with slower max turbo clocks for the iGPU too. Your explanation makes no sense.
    Maybe the 26.5% increased pixel count is the reason for the bad performance.
    Reply
  • akdj - Sunday, May 05, 2013 - link

    "This should be huge for the 15 inch retina pro though. Its scrolling is currently less smooth than the 13 because it uses onboard graphics unless doing something intense."
    Not true at all. There aren't any scrolling issues due to hardware; there never were. Poorly coded sites...and non-HiDPI-ready sites exhibited some scrolling lag...initially. 10.8.2 helped a LOT, 10.8.3 even more significantly. As well, Ars, Facebook, TechCrunch...and a few other media-rich, content-heavy, extremely 'over busy' sites have helped a ton with their coding. The Intel HD 4000 is actually a damn good part, easily able to run HiDPI monitors like Apple's own Cinema Displays without issue. Not to mention, the 15" includes a discrete, overclocked 650M. If ever you're scrolling/visiting a site that exhibits 'lag', you can easily enable your discrete card to alleviate any challenges. A lot of this has to do with the pixel doubling as well...which you can see is using CPU power (if you're running iStat...or any number of apps that let you gauge processor load in real time), and the rMBP is more than equipped to handle this upscaling. Again...the updates to ML, coding pros working on busy sites...and the impact that more HiDPI monitors have on the Internet will be the kick in the rear some of these lazy sites need to update their code to perform better.
    The entire 'lag' thing with rMBPs has been blown so SO far out of proportion, it's silly. I know when Anand tested the rMBP initially, it was on Lion, 10.7. That is where there were issues. 10.8 helped a lot...and the point updates, as well as the software industry over the past year, have significantly helped with the early speed bumps the rMBP experienced: non-optimized apps and software, as well as poorly coded websites and low-resolution, non-vector imaging. So much has changed over the past 10 months since its release, it's crazy! Very exciting actually, as hopefully these high-resolution monitors become ubiquitous sooner rather than later. Rest assured though...as an rMBP owner (we've got a pair that replaced our 17" 2011 models as primary computers, as our business is mobile), it really makes me sad that so much misinformation is still spread about the rMBP. Even as a Windows fan, you should be 'rooting' for this evolutionary display update. When using the display for long periods of time, your eyes will thank you!
    Reply
  • seapeople - Friday, May 10, 2013 - link

    So you're telling me that if I walk to Best Buy, set the rMBP to 1920x1200 equivalent, open up AnandTech, zoom in so the text covers most of the screen, and start scrolling around, it's NOT going to lag like crazy? As of a few weeks ago, significant lag was still there... Reply
  • firex123 - Thursday, May 02, 2013 - link

    how would an AMD Temash/Jaguar roughly compare to a tablet Haswell? Objective views pls ;D Reply
  • whatthehey - Thursday, May 02, 2013 - link

    Ugh... that would be a blowout, at least in performance, for Haswell. The current E-series APUs are only slightly better than Atom garbage! Oh, they're faster than ARM as well in most cases, but I think even some of the ARM GPU configurations beat AMD's E-series parts. Temash/Jaguar go quad-core, but that doesn't do anything for lightly threaded workloads, which are still very common. Reply
  • kyuu - Thursday, May 02, 2013 - link

    Obviously Haswell would beat it... Jaguar are low power CPU cores like Atom (although superior). Comparing Jaguar cores to Haswell is inane. Reply
  • Stryfe34 - Thursday, May 02, 2013 - link

    The 4770K is the top-of-the-line desktop processor, yet the 4770R has the biggest GPU upgrade. Based on what I've been reading, the "R" designator means that the CPU is soldered to the motherboard, so no upgrades, so I'm not in the market for it. You'd think they would add that capability to the 4770K also. Reply
  • beginner99 - Thursday, May 02, 2013 - link

    Why? The 4770K is for overclocking, and overclockers usually don't run with an iGPU but have a dedicated GPU anyway. Personally I would rather add 2-4 more cores to it and remove the iGPU completely. Reply
  • Kjella - Thursday, May 02, 2013 - link

    People buying an integrated chip where you can't upgrade the CPU or GPU separately probably aren't very interested in upgrades at all. Besides, Intel's sockets never last longer than a tick-tock, so IMO it doesn't make any sense anyway. Personally I'm considering upgrading an LGA1156 system; LGA1155 will already have come and gone before LGA1150 arrives, so why bother? What you can't do is mix and match motherboards and CPUs, but the mainstream boards relying on the Intel chipset are almost clones anyway. The only real downside is that if something breaks outside warranty, you must replace the whole thing. Reply
  • Aikouka - Thursday, May 02, 2013 - link

    No, at least from what I gathered, the R-series chips are your normal, desktop LGA-style chips. The point is that we originally thought that the iGPU with embedded DRAM wouldn't be available to desktop users, but that's what this R series is for. Reply
  • Aikouka - Thursday, May 02, 2013 - link

    That 4770R looks like it'd be a great (albeit pricey) replacement for my i3-3225 in my HTPC! The TDP is only 10W higher, so the Streacom case shouldn't be stressed too much. Reply
  • Haserath - Thursday, May 02, 2013 - link

    I hope you test both K and R models on launch. It would be great to know if R is overclockable or not. Reply
  • MrSpadge - Thursday, May 02, 2013 - link

    Yeah.. if the eDRAM works as a nice L4 cache for the CPU and someone made a high-quality mainboard, I could see people buying these instead of regular 4770Ks and pushing for the +25% or +67% OC via the bus. Reply
  • Pappnaas - Thursday, May 02, 2013 - link

    Haswell will win big time if you can play things like WoW at 1080p with good settings on it. But if they keep driver support like it has been, Iris will backfire. Reply
  • perry1mm - Friday, June 07, 2013 - link

    You can do that on Ivy Bridge. I have a Sony Vaio Duo 11 with the i7-3537u (8GB 1600Mhz RAM) and can play WoW at 1080p with a mix of 'Fair' and 'Good' graphics settings (It goes Low, Fair, Good, High, Ultra) @ 40+ FPS (except in a few places it will dip to 30 or so).

    I can only imagine the high voltage Ivy Bridge processors can run WoW even better, and the desktop versions probably somewhere between Good and Ultra.
    Reply
  • iwod - Thursday, May 02, 2013 - link

    I don't like the Pro branding, since it signals something similar to workstation-type graphics when it's really more like a supercharged version with eDRAM.

    The problem is the same: drivers. Unless Intel commits to improving drivers and performance on a regular schedule, this means nothing compared to AMD's and Nvidia's offerings.
    Reply
  • tipoo - Friday, May 03, 2013 - link

    No different than Radeon Pro I suppose, although that was dropped. Reply
  • Ilias78 - Thursday, May 02, 2013 - link

    Sorry Anand, but until I see graphical performance that equals mid-to-high-end graphics cards (because I believe that we would all want something like this at the end of the day), I see no revolution from Intel here. Comparing Iris to Radeon and GeForce is just plain wishful thinking and over-excitement for no real reason. It's just another integrated GPU with higher clocks. Reply
  • mayankleoboy1 - Thursday, May 02, 2013 - link

    +1 Reply
  • ven - Thursday, May 02, 2013 - link

    After reading all the information about Haswell, it is clear there are no significant changes for the socketed version of the processor. Completely ignoring those processors doesn't look good for the PC market (which is already in free fall); at least Intel should bring Iris to the low-end "i3" series.

    What Intel needs now is very good support from OEMs. It is high time Intel started supporting video game companies and working closely with them. Until their processors capture a place in devices like the Vita or Edge, those benchmark bars are nothing.
    Reply
  • Wurmer - Thursday, May 02, 2013 - link

    "At least intel should bring the iris to the low-end "i3" series."

    I agree. It's obvious to me that the best IGP should be paired with lower-end CPUs. I am still baffled by Intel's insistence on pairing the best IGP with the best CPU; it doesn't make much sense. I think they would have much more success if CPUs at the i3 or i5 level could be paired with the top-of-the-line IGP. It's been said many times over, but if someone is ready to shell out $300+ for a CPU, he doesn't care much about the IGP.
    Reply
  • Spunjji - Thursday, May 02, 2013 - link

    Agreed also. Intel's strategy here is irritatingly abusive towards consumers. I don't see any mention and/or criticism of it in this puff piece, though.

    I'm serious about the above too. I love this site and I dislike the tendency for people to cry bias, but this piece is a press-release with some drooling added to it. Not pretty.
    Reply
  • silverblue - Thursday, May 02, 2013 - link

    AMD is guilty of this as well. I wouldn't mind seeing a lower CPU option combined with the top graphics option; I doubt even that will bottleneck the GPU. Hell, with a lower power CPU, maybe there'd be more in the TDP budget for the GPU.

    AMD could look to diversify their portfolio here.
    Reply
  • ven - Friday, May 03, 2013 - link

    "More TDP budget for GPU & acceptable CPU performance".God! I miss the Nvidia Ion. Reply
  • DigitalFreak - Thursday, May 02, 2013 - link

    "I'm serious about the above too. I love this site and I dislike the tendency for people to cry bias, but this piece is a press-release with some drooling added to it. Not pretty."

    Do you see the word "review" anywhere in this article? dipshit
    Reply
  • Spunjji - Friday, May 03, 2013 - link

    Did I say it was a review? No, I didn't. I did say that I expect a release like this to have a little critical thought applied to it, especially coming from someone with the technical expertise that Anand wields.

    Can you understand that? The concept that your respect for someone may lead to disappointment?

    Now kindly back off.
    Reply
  • formulav8 - Friday, May 03, 2013 - link

    It's nothing more than kissing up to Intel and raising hype. Reply
  • Ilias78 - Saturday, May 04, 2013 - link

    Indeed. Until we see some better driver support from Intel, this announcement doesn't mean squat. And honestly, Anand, I'll say it again: comparing Iris to GeForce and Radeon was a straight kiss-up to Intel. It's not even close to that. Reply
  • fteoath64 - Thursday, May 02, 2013 - link

    You can blame marketing for this. They want customers to buy mid- and high-end parts, not so much low-end. At the same time, making the low-end part (i.e. the i3) slower than expected encourages an upgrade to a mid-range part. Again, those pesky marketing tricks used often. Hey, it is the price point: if you buy an i3, do not expect i5 performance; if you want that, get the i5. It explains why we are seeing fewer models on offer for the i3 parts. Reply
  • Spunjji - Friday, May 03, 2013 - link

    Probably right. It's easier to just buy an i7 than to try and wade through the spec sheets to figure out whether the features you want are included in a lower-end variant.

    Same game Microsoft likes to play with Windows licencing...
    Reply
  • tipoo - Thursday, May 02, 2013 - link

    3x would still put it under the 650M they compared it to, IIRC; the 650M was about 5x faster (lazy and at work so not checking, do correct me if wrong).

    Also how does Intel fare on this whole frame latency thing?
    Reply
  • kallogan - Thursday, May 02, 2013 - link

    2.5x the HD 4000 score would give P1750. It's nowhere close to the P2200 of the GT 650M. Reply
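    (A rough sanity check of the 3DMark 11 figures quoted in the comment above. The GT 650M's P2200 is from the comment; the mobile HD 4000 baseline of roughly P700 is an assumption, and the exact figure varies by laptop and driver.)

```python
# Sanity-check the quoted 3DMark 11 arithmetic.
hd4000_score = 700        # assumed mobile HD 4000 baseline (P-score); varies by system
gt650m_score = 2200       # GT 650M score quoted in the comment

iris_estimate = 2.5 * hd4000_score   # the claimed 2.5x uplift over HD 4000
print(iris_estimate)                  # 1750.0
print(iris_estimate / gt650m_score)   # ~0.80, i.e. still ~20% behind the 650M
```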
  • whyso - Thursday, May 02, 2013 - link

    It's about 3 times faster in games. Reply
  • DesktopMan - Thursday, May 02, 2013 - link

    No GT3e for dual core parts, that's surprising. What does this mean for the 13 inch MacBook Pro which up to now was destined for GT3e? Reply
  • fteoath64 - Thursday, May 02, 2013 - link

    Since Apple decided back in the Sandy Bridge days to use Intel's IGP in the 13-inch Pro, there is no loss from their perspective, as the new part is faster (if only a little, still faster). For the 15-inch Pro, they settled on the GT 650M, which is still faster than GT3e, so Intel is not making much headway on the Apple side of things. It would certainly displace many lower-end discrete graphics on the PC side of the business, though. Still a win for Intel (maybe big, but not huge). Yawn. The evolution is ongoing as the specialists (AMD and NV) pull ahead strongly with their discrete chips for mobile. GPUs are a losing business for Intel, but they still plod along (they feel they don't have much of a choice, really).
    Also, there is no incentive to pair an i3 part with GT3e, since AMD Trinity and Richland would cream it and are cheaper. The competition is at the doorstep. This is why Intel is hyping the i7 for its superiority. Rightly so, but who really needs it?! Not most.
    Reply
  • Kjella - Friday, May 03, 2013 - link

    Losing business? Intel has been gaining market share every year; fewer and fewer people see the need for discrete graphics. APUs are what keep AMD afloat. Don't get me wrong, I have always had discrete graphics since I'm a gamer, but I also know I'm in a small minority. Reply
  • DesktopMan - Friday, May 03, 2013 - link

    AnandTech has been pretty clear on the fact that Apple is the one customer pushing Intel for higher GPU performance. It would not surprise me if the 15-inch MBP drops Nvidia once the Haswell refresh comes around. A lower-clocked quad core for the 13-inch, maybe? Would be nice. Reply
  • tipoo - Friday, May 03, 2013 - link

    Not unprecedented for Apple to go backwards on GPU performance; the current low-end iMac just did, and the MacBooks sometimes did going from the 320M to the HD 3000. Reply
  • karamazovmm - Saturday, May 04, 2013 - link

    Not really; you forgot that Nvidia couldn't do chipsets for the Core i series. Reply
  • JDG1980 - Thursday, May 02, 2013 - link

    I'm curious as to what this improvement means in real-world terms. For example, I currently have a fairly modest discrete card (Radeon HD 5670) which I use for some light gaming. How will Iris Pro 5200 compare to this and other older graphics cards? At what point is a discrete card still needed? Reply
  • Spunjji - Friday, May 03, 2013 - link

    You'll need to wait for benchmarks to figure that one out for sure. 3DMark results don't scan well to real-world performance, particularly with Intel GPUs. At a guess, however, I'd say the performance should be comparable if you were prepared to accept some image quality compromises. Reply
  • Quizzical - Thursday, May 02, 2013 - link

    Only OpenGL 4.0 and not 4.3 like AMD and Nvidia offer? Is Intel ever going to try to catch up on API support? Reply
  • makerofthegames - Thursday, May 02, 2013 - link

    Does anyone see Intel moving into the discrete GPU market?

    I don't think it would be too hard for them at this point. It would basically just be a GT3 or GT3e with yet more execution units, something like 80, 120, maybe 160 EUs, and with a dedicated memory controller and memory (probably ripped from the GDDR5 of the Xeon Phi).

    Intel's definitely shown interest in desktop graphics (the aforementioned Xeon Phi was, at one point, going to be a consumer graphics card). And they have most of the parts they need now to finish one up.

    I don't think they're going to be jumping into the top-end market just yet, but I could see them continuing to eat up the low-end market, maybe competing with AMD's 77xx cards, or up to Nvidia's GTX 660.
    Reply
  • jeffkibuule - Thursday, May 02, 2013 - link

    Even if they made the part, would anyone buy it? Who would spend $200+ on an Intel GPU when their driver performance is subpar? Reply
  • makerofthegames - Thursday, May 02, 2013 - link

    Are they really that bad? They're considered by far the best on Linux, and on Windows I've heard far more complaints about AMD drivers than Intel, although that may have more to do with most people who care about graphics not using Intel. Reply
  • tipoo - Friday, May 03, 2013 - link

    "may have more to do with most people who care about graphics not using Intel."

    Yup, answered your own question. AMD drivers are a dream compared to Intel, even in 2013.
    Reply
  • Spunjji - Friday, May 03, 2013 - link

    Haha, yeah, not even the same ballpark under Windows. It's an issue of scale. AMD drivers are (on average) a little worse than nVidia (with many glaring and complicated exceptions on either side), while Intel are a LOT worse. We're talking fun issues like games not running at all and/or severe graphical glitches. Reply
  • xdrol - Thursday, May 02, 2013 - link

    That's nonsense. The market is moving towards integrated graphics, not discrete ones. Reply
  • makerofthegames - Thursday, May 02, 2013 - link

    On the low end, yes. Discrete cards are quickly becoming a thing only for gamers and 3D professionals (as well as those needing heavy compute power). But that could be a more high-margin market for them, compared to integrated graphics. And with minimal R&D costs on top of the work they've already done for the iGPUs, the margins are even higher.

    It makes sense to at least try it, on the small scale. Whether it makes sense in the context of their broader strategy is beyond my ability to guess.
    Reply
  • Azethoth - Thursday, May 02, 2013 - link

    No, you are missing the big picture. The MHz race ended in the mid-2000s due to heat. Now we are into multiple cores. However, that only buys you something if you can in fact make a particular workload parallel. So now you are sitting there with the ability to have 16 cores due to process improvements, but 4 cores were already plenty.

    The obvious answer is to bring more stuff on chip. From Memory controllers to GPU and physics chips there will be more and more non CPU stuff built in over time. It may be true today that you are better off with the GPU on a separate card. In a few years though there will be plenty of room on board. Couple that with unified GPU / CPU / whatever memory and you quickly get a single chip solution that is superior to anything discrete simply due to off chip bandwidth limitations.
    Reply
  • Haydon987 - Friday, May 03, 2013 - link

    Yes things are moving this way, but keep in mind that this "amazing ultrabook graphics performance" is approximately 7x slower than the flagship dedicated GPU already being sold by NVIDIA. While that is an improvement over previous generations, we're still at least a decade or more out from any real convergence on the high end of the GPU market.

    So right now, at this moment in time in the market, it would make more sense to pair the high end integrated GPU with the lower performance CPUs because anyone buying a 4770K, for instance, is in all probability going to also be buying a GPU closer to that flagship level of performance on the GPU side of things.
    Reply
  • rituraj - Thursday, May 02, 2013 - link

    Just release it already
    A Lenovo thinkpad helix with i7 + iris.. god I am .... oh god...
    Reply
  • superjim - Thursday, May 02, 2013 - link

    If Richland gets the estimated 20% bump over Trinity, I'm wondering what the highest end GPU we will see in Ultrabooks will be. 8670 vs GT3? Keep in mind the 35W A10-4657M is already in the 14" Vizio CT14T-B0 with a 7660g so if AMD can cram that in an ultrabook, I don't see why a 5100 wouldn't be possible. Reply
  • alfredska - Thursday, May 02, 2013 - link

    Why do we not have a "Flag as spam" link? Reply
  • nixtish - Thursday, May 02, 2013 - link

    Which one of these SKU's do you guys think would make it into the next gen rMBP ? Reply
  • Ktracho - Thursday, May 02, 2013 - link

    I would guess 28W GT3 for the 13" MBP and 15W GT3 + discrete graphics for the 15" MBP. Reply
  • karamazovmm - Saturday, May 04, 2013 - link

    So apple would put a lower performing part in the rmbp 13 2013 than what they had at launch? Complicated business decision Reply
  • brvnbld - Thursday, May 02, 2013 - link

    No way Intel is going to compensate for desktop graphics just by introducing CPUs that come with graphical components. IMO Intel is a conservative brand. AMD might not be competitive with Intel in terms of performance, but the price/performance ratio is good for AMD products, and I like the way AMD is taking bold steps in GPU/CPU. Reply
  • HardwareDufus - Thursday, May 02, 2013 - link

    I don't mind the BGA. I haven't upgraded a CPU on a motherboard in years. I still have the same Clarkdale i5-655K on my H55N Gigabyte mini-ITX machine, still the same 8GB of RAM. Still very pleased with the Ivy Bridge i7-3770K on my P8Z77-I Deluxe Asus mini-ITX machine, still the same 16GB of RAM.

    I'd have no objection to a Haswell I7-4770KR on a P887Deluxe-I Asus miniITX (it would be nice if Asus released a top of the line miniITX board with the Z87 chipset.. and put the top of the line Haswell with GT3e (Iris Pro or 5200..whatever you want to call it). I'd buy it and wouldn't mind one bit that the CPU was soldered.
    Reply
  • Concillian - Thursday, May 02, 2013 - link

    Have the Intel drivers gotten any better? This isn't a sarcastic comment, I haven't really looked at Intel drivers since ~Sandy Bridge or so. Drivers were terrible and it wouldn't really matter if the performance was adequate. Have they made inroads on the software front? Reply
  • tipoo - Friday, May 03, 2013 - link

    Still very slow updates. Reply
  • Torashin - Thursday, May 02, 2013 - link

    If we're going to play that game, the AMD 7660D (5800K GPU) got a score 4.3 times better than the Intel HD 4000 in 3DMark 11. So you can either claim that the Haswell GPU is 2.5 times better than Ivy Bridge and still only about half as good as Trinity, or that it really isn't anywhere near twice as good. Either way, AMD wins. Reply
  • formulav8 - Friday, May 03, 2013 - link

    This will be Intel's 4th gen or 3rd or something, and the earlier HD 4000 series still couldn't beat AMD's 1st gen, IIRC. Maybe Intel can claim they beat AMD's 1st-gen Fusion or something now? Reply
  • HisDivineOrder - Thursday, May 02, 2013 - link

    It won't be long before Iris LE and Iris Lite and IrisL and Iris MX show up to help bring Iris down to the masses with the same old Intel HD graphics of old. Reply
  • Spunjji - Friday, May 03, 2013 - link

    That is, indeed, Intel's usual style. Reply
  • Jumangi - Thursday, May 02, 2013 - link

    Until Intel commits to vastly improving their pathetic driver support, this announcement doesn't mean squat to me. AMD and Nvidia know that great graphics has two parts to it. So far Intel chooses to ignore one of them. Reply
  • Frenetic Pony - Thursday, May 02, 2013 - link

    Both impressive and disappointing: a nice improvement, but I'm vastly disappointed in how restrictive the TDP and chip tiers are. I'd love to see a 5200 or 5100 at a much lower TDP. If a Haswell CPU can get down to a 10-watt TDP for a Core i5, I don't think an underclocked 5200 at a 17-watt TDP would be too much to ask.

    Basically, a Core i5 is all I need for CPU power. My current Ivy Bridge Core i5 is as fast as I need, for games or Lightroom or whatever. The GPU, on the other hand, still struggles with anything beyond the most basic games. If Intel won't do it, I have hope Nvidia will. I'd love to see a dedicated Nvidia mobile card fit into a 10-watt Core i5 13" ultrabook, whatever that would require.
    Reply
  • l_d_allan - Friday, May 03, 2013 - link

    Literally a "war story" ... I worked on a project in the 80's on Silicon Graphics ** Iris ** workstations doing Star Wars simulations. Smart pebbles and brilliant rocks. Hypercube supercomputer back-end. Bleeding edge. Reply
  • glugglug - Friday, May 03, 2013 - link

    The extremely limited variation of GT3e parts makes me wonder if they are planning to have super high pricing and low availability of this like with the Pentium Pro when there were yield issues having the L2 cache on chip. Reply
  • Mastadon - Friday, May 03, 2013 - link

    Please help me understand....will the top-of-the-line GT3e 5200 graphics be available on desktop motherboards? Reply
  • tipoo - Friday, May 03, 2013 - link

    On non socketed BGA parts, yes. Reply
  • Mastadon - Friday, May 03, 2013 - link

    BGA is a type of socket. You mean only on the BGA socketed CPUs with the 'R' numbered CPUs like Core i7 4770R ? I hope so. Reply
  • Azyx - Tuesday, May 07, 2013 - link

    BGA (NON-socketed). It means you will have to buy it soldered to the motherboard; that's what BGA means here. Reply
  • hellcats - Friday, May 03, 2013 - link

    Ah, shades of Silicon Graphics (SGI). I'm staring at two SGI IRIS Indigos stacked on top of one another because I just can't bear to throw them out; they brought me so much joy. Glad to see the IRIS name associated with 3D graphics again. Reply
  • DanaGoyette - Saturday, May 04, 2013 - link

    Now, if only Intel would support 30-bit (10-bit per channel) color and 120Hz refresh rates on displays... Reply
  • karamazovmm - Saturday, May 04, 2013 - link

    a bunch of workstation and gaming machines would be thrilled by that Reply
  • Talaii - Wednesday, May 08, 2013 - link

    I've run a 120Hz display off an Intel IGP before. You just have to use DisplayPort, since I haven't seen any Intel motherboards with dual-link DVI ports. Reply
  • Polacofede - Saturday, May 04, 2013 - link

    Is this a tech article or an advertisement? Looks like the latter to me...
    Lots of misleading graphs and no real facts.
    Reply
  • Azyx - Tuesday, May 07, 2013 - link

    I just can't wait to see how those new HTPCs with Haswell GT3e are going to be. (That was a good rhyme, btw.) Reply
  • Oberoth - Saturday, May 11, 2013 - link

    I will be upgrading from a very old system but want to wait for either Volcanic Islands or Maxwell for a graphics card.

    So the question is: what is faster, my AMD Radeon HD 4870 512MB or the Intel HD 4600 on the 4770K?

    Next question: as I am pretty sure Intel will not have managed to compress an entire high-end(ish) video card from 2008 into a few transistors on a CPU, can I still use the OpenCL power of the Haswell chip if I have my old discrete card plugged in? I remember in early Sandy Bridge reviews you could only use QuickSync if you had a monitor plugged into Intel's video outputs.
    Reply
  • systemBuilder - Tuesday, August 05, 2014 - link

    I am pretty certain Iris was SGI's earliest brand of workstation. Your knowledge of history is really, really sad. For 15 years post-1985, SGI rocked the graphics world; they invented OpenGL and also the STL, which is how most C++ programs get written quickly these days... Reply
