The bit of eDRAM on even ultrabook parts may be one of the more exciting bits of Skylake. Should bring baseline performance up significantly, even with half the eDRAM of the Pro 5200.
That 72EU part also comes shockingly close to XBO GPU GFLOP numbers, which, while not directly comparable, means integrated graphics will catch up to this gen's consoles very soon.
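For rough context, here is the kind of back-of-the-envelope peak-FP32 math behind that comparison (a sketch only; the ~1.0 GHz GPU clock, the 16 FLOPs per EU per cycle for Gen9, and the Xbox One's 768 shaders at 853 MHz are assumptions here, not figures from the article):

```c
#include <stdio.h>

int main(void) {
    /* Gen9 EU: two 4-wide FP32 FMA pipes -> 16 FLOPs per cycle (assumed) */
    double iris_gflops = 72 * 16 * 1.0;   /* 72 EUs at an assumed ~1.0 GHz */
    /* Xbox One GPU: 768 shaders, FMA counted as 2 FLOPs, 853 MHz */
    double xbo_gflops = 768 * 2 * 0.853;
    printf("72-EU Iris Pro (est.): ~%.0f GFLOPS\n", iris_gflops); /* ~1150 */
    printf("Xbox One GPU:          ~%.0f GFLOPS\n", xbo_gflops);  /* ~1310 */
    return 0;
}
```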
But it's irrelevant in the real world for 5 reasons:
1) Intel's best CPUs don't focus on the IGP (i.e., the i5-6600K, i7-6700K, 5820K-5960X), which means someone who is interested in gaming is buying a dedicated i5/i7, especially the K series, and going for a discrete graphics card.
2) Since we are discussing PC gaming, not console gaming, a budget gamer is going to be better off getting a lower end discrete GPU like the $90 GTX750Ti or even going on the used market and buying a $100 HD7970/GTX670, instead of trying to play games on Intel's 72 EU part.
3) Looking at historical pricing of Intel's parts with eDRAM, they'll probably cost almost as much as the Xbox One/PS4.
4) No one buys an Xbox One/PS4 because they want the best graphics. If you want that, you build a Core i7 + GTX980Ti SLI/Fury X CF system. People buy consoles for ease of use, to play online with their friends, and to have exclusives. In the areas the consoles excel, a 13-15" PC laptop with a 72 EU Intel part will fail miserably in comparison to the gaming experience one would get on a PS4/XB1 + large TV in the living room. Frankly, these 2 devices aren't competing with each other.
5) Overall cost of the device - a $300 Intel CPU is worthless without a motherboard, RAM, SSD/HDD, keyboard, etc. That means trying to compare how fast an Intel CPU with 72 EUs and eDRAM is vs. an Xbox One and PS4 while ignoring the total system cost is misleading.
I guarantee that anyone interested in PC gaming couldn't care less about Intel's IGP, as any serious gamer will be getting a Skylake laptop with a Maxwell and, next year, a Pascal GPU.
I doubt it. They might be able to match the spec numbers, but actual real-world performance will likely still favor the console, simply because of the specialized optimizations developers use on consoles vs. the more generic options PC games are forced to use thanks to the near-infinite hardware combinations one can have.
There's already a Vulkan driver ready for Intel (on Linux) made by LunarG. That will allow for the optimizations needed, if the developers care to make them.
Ahaha, you think this thing will match an Xbox One? Wow, the delusion is seriously big with some people. Also, one of these high-end Iris Pro CPUs alone will cost more than an entire console or a decent AMD laptop with still better graphics.
Not really. They use GCN stuff, which yes is in AMD APUs, but the implementations in the PS4/XBO are much larger than the ones in APUs and much closer to the ones in discrete cards.
Unfortunately, reading comprehension is not the strong suit of the average thread troll. You can couch something in caveats all day and some fool will come along and run off ranting or name calling regarding something that you already covered or that was not even part of what you said.
AMD better IGPU? LOL. The A10-7870K gets slaughtered by Broadwell Iris Pro, and Skylake is offering up to 50% more performance. AMD were officially made irrelevant in the only market they even had left (APUs) this year.
But haven't you heard? Broadwell's Iris Pro already surpassed AMD APU graphics performance. Couple that with CPU performance that AMD can't touch at this time, and it's already better in all other aspects, except price.
You are forgetting casual gamers and Macs. Currently there is no reason to get a GPU lower than a 950M in a laptop, and in this tier a laptop can be a very good console replacement with Steam desktop mode and a wireless controller, especially with a WiDi-capable screen. I am talking about $700-800 laptops that play games from 2012 and before at any resolution, and current games at 720p.
And with a 950M you are at gaming-laptop level, above $1000, which is a different category.
Really? "Far better value"? First, the GPU isn't far better. It's slightly better.And that's it. Less processing power, more heat, less battery life and potentially heavier and bulkier laptop. Certainly a "casual gamer" will accept all this drawbacks for the sake of a handful extra FPS.
Yes, it will be better and cost far less. These Intel chips cost way more. So go ahead and think you will be gaming on these things without dropping a grand or more on the laptop.
"Any sort of "casual gamer" will get far better value from a AMD A10/FX laptop"
- I think you are overestimating AMD's GPU lead and underestimating the power and thermal advantages on the Intel side. I wouldn't buy an AMD chip in a laptop under any circumstances. They had some great chips back in the Thunderbird and Athlon 64 days, but since the Core 2 came out in 2006 they are miles behind. They can't even see Intel's dust at this point.
No, people keep waaayy overestimating Intel's integrated graphics. They still absolutely suck for any sort of gaming. If you want a somewhat capable laptop that can do at least decent mobile gaming, AMD is still the only option unless you go with a discrete GPU. That's the fact.
If your gaming plans revolve around an integrated GPU, you're still better served to go the AMD route. While the CPU is not as fast, it's no slouch either, and gaming performance is going to be acceptable in comparison on most titles.
Um, first-hand experience: MacBook Pro 2015 (Iris 6200): Skyrim, ESO, Civilization 5, Homeworld all run at 1440x - I love how all these people talk about Intel integrated graphics sucking, meanwhile I'm getting crushed in Civ 5 and kicking ass in Homeworld and ESO. I'm not lugging an integrated laptop around to play games, I have a laptop and I like to have ONE LAPTOP, and guess what, everything I've thrown on here has played. My MBA 2012 HD 4000 struggled with Skyrim and Civ 5 but I still played. Please stop talking theoretical and talk about your actual rig... /end rant
@retrospooty: The Core 2 era was more a return to parity. One of the most even matchups I can remember was the ironically similarly numbered Phenom II 955 and the Core 2 Quad Q9550. Nehalem is what really did the damage. Here's hoping Zen can put AMD back in the ballpark.
I do think AMD has a pretty significant GPU advantage in the area of gaming over Intel. However, as you've stated, the power/thermal constraints do not allow them to fully exploit this advantage. A CPU-intense game, even if not CPU limited, will chew up much of the GPU's available thermal envelope, effectively eliminating any advantage AMD had. Granted, there are cases where the thermal solutions in play provide the necessary thermal headroom, but these are mostly found in laptops that are already using discrete chips.
The Phenom II didn't come out until after Intel had retired the Core 2 line. Everyone wants AMD to be competitive but the fact is they are miles behind Intel.
Guess you didn't read the review of Broadwell Iris Pro on this very site. AMD's GPU loses by as much as 20-30% in most games vs Broadwell Iris Pro. Skylake Iris Pro will be offering up to 50% more performance.
4: Not everybody who is interested in a gaming machine can afford a Core i7 and several $1000 graphics cards in an SLI configuration. A lot of gamers have a budget between $500 and $1000, and if you can get/get close to XB1 performance with just an Intel IGP, it would be perfect for that kind of budget.
Also: Why would you think a 13-inch laptop with Iris Pro and 72 execution units would "fail miserably" in comparison with an XB1/PS4?!?
That's ridiculous. Any advantage the console would have is tiny.
Just get two wireless controllers and hook up the laptop to your HDTV with a HDMI cable, and the experience would be close to identical....
"Also: Why would you think a 13' laptop with Iris Pro and 72 execution units would "fail miserably" in comparison with an XB1/PS4?!?"
Because he specifically mentioned this in conjunction with "user experience". The PC gives you freedom but certainly not the ease of use of a console. Which is mainly why these things exist at all.
Lolz, if you think an Intel-only machine with any sort of integrated graphics (even the best Iris Pro) will give you anything close to an Xbox One game, you're seriously naive and ignorant. Stop looking at theoretical Gflops numbers to make comparisons.
Well, a few posts back up you were stating that AMD's A10 APUs have "far better graphics" when they failed to beat the last-generation Iris 5200 GPU, and now there you are, talking about naivety and ignorance.
Compare actual gaming on the two, Mr. Naive. Also compare the huge cost differences of these chips. An Iris Pro laptop will be far, far more expensive.
Is it your love of AMD that makes you say this? Think about it. The XB1 uses DDR3 for its GPU. This will use DDR4. The XB1 has a small eDRAM cache. Skylake has a small eDRAM cache. The XB1 has a very weak AMD Jaguar based CPU. This will have a much stronger Skylake based CPU.
So why is it so far-fetched to think that Skylake could get close to matching the XB1? It won't outright beat it (not this one, maybe the next one), but it could get close with proper optimizations and DX12.
I didn't say it was a good value. Just interesting how times have changed, that Intel integrated graphics are this close to a two year old console already.
"I guarantee it that anyone interested in PC gaming could care less about Intel's IGP as any serious gamer will be getting a Skylake laptop with a Maxwell and next year a Pascal GPU."
I would argue that anyone interested in PC gaming will avoid laptops like the plague and buy/build a desktop PC so they can replace graphics/ram/CPU easily and pay a lot less for a DX12 card, and on that note, anyone wanting to build a DX12-ready gaming machine right now will be getting a Radeon 290/390(X) series card and skipping Maxwell altogether, as it doesn't support hardware asynchronous shaders.
Well, when the MacBook gets it, you can stream your screen to the Apple TV, connect an Xbox One/PS4 controller, and play like you're on a console, having similar graphics and at the same time a computer for school etc. But of course these devices are not competitors to consoles, it's just interesting what is possible.
You actually make a great point. Despite the fact that on a desktop an i5 paired with a $200 GPU will crush integrated graphics, on a laptop a 72 EU CPU could do some serious work. This paired with DDR4 could kick integrated graphics up a notch, which is good for everyone, as it raises the lowest common denominator.
Like you say, it probably won't be long until integrated graphics catch up with the Xbone, especially as they have a CPU advantage in many cases, and with DDR4 they have VERY similar system memory. It'll be a few more years after that till the PS4 is caught up with. I would add that tablets will probably catch the Xbone before the end of this generation. It could be an interesting future, where games could come to tablet, PC, and consoles simultaneously.
"... as it raises the lowest common denominator." That's the important bit. One reason there aren't more PC gamers is simply that there aren't that many people who have modern PCs powerful enough to run today's games. This limits the technical ambition of PC games as developers have to keep in mind the wider PC audience and not just us tech enthusiasts. If integrated graphics can continue improving generation to generation, in a few years time even $600 laptops will be capable of running games at comparable fidelity to the Xbox One. Adding substantive amounts of eDRAM to all integrated GPUs would go a long ways towards making that dream a reality.
I am hoping to replace my Arrandale laptop with an ultrabook, and really hope that the 15w or 28w Iris with eDRAM can give me something with a high resolution display and smoother running UI than Retina Haswell/Broadwell.
"Intel’s graphics topology consists of an ‘unslice’ (or slice common) that deals solely with command streaming, vertex fetching, tessellation, domain shading, geometry shaders, and a thread dispatcher. "
This part of their architecture seemed like the weak spot which led to little scaling between 1/2/3 slices going from DX11 to DX12. So will that remain the same with Skylake, or are there other differences that will allow better scaling with DX12?
The unslice is not the same as slice common. The unslice is what's not in the slices, obviously, but the "slice common" is what IS in the slice but ISN'T the EUs themselves.
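To make the slice/subslice terminology concrete, here is a rough sketch of how the Gen9 EU counts fall out of the topology (the 1/2/3-slice GT2/GT3/GT4 configurations and the 3 subslices of 8 EUs per slice are assumptions based on public Gen9 material; the unslice sits outside all of this):

```c
#include <stdio.h>

/* Gen9 (assumed): each slice holds 3 subslices, each subslice holds 8 EUs.
   The "unslice" (command streamer, geometry, thread dispatch) is separate. */
#define SUBSLICES_PER_SLICE 3
#define EUS_PER_SUBSLICE    8

static int eu_count(int slices) {
    return slices * SUBSLICES_PER_SLICE * EUS_PER_SUBSLICE;
}

int main(void) {
    printf("GT2 (1 slice):  %d EUs\n", eu_count(1)); /* 24 */
    printf("GT3 (2 slices): %d EUs\n", eu_count(2)); /* 48 */
    printf("GT4 (3 slices): %d EUs\n", eu_count(3)); /* 72 */
    return 0;
}
```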
Apple appears to take the Core M and use it in a 7 W cTDP up configuration in the Retina MacBook. I wonder if the increase in performance would be worth the increased heat and power consumption to use U-class processors in a 7.5 W cTDP down configuration instead? Or even try to accommodate 9.5 W cTDP down U-class processors to take advantage of the GT3e to better drive the retina display?
FIVR: "For Skylake-Y, this also resulted in an increase for the z-height of the processor due to having leveling transistors on the rear of the package."
Doubtful. You might have seen a 6700K model briefly for a moment before it instantly sold out again, but you haven't seen a 6700 non-K model, since the launch just happened (the SKU isn't even on Amazon or Newegg yet).
I find it surprising that the reviewer failed to mention anything about the new quad-core/quad-threaded mobile i5s. To me these are some of the most, if not the most, interesting new SKUs. Also it would be nice if there was a specific new article investigating the efficiency gains from Skylake. From the Skylake K review it seemed that the 6700K consumed too much power for the performance improvements it gave. Maybe it was due to the much higher voltages. In any case it would be interesting to see a follow-up article. Cheers.
Well, for me the mobile quads are totally uninteresting, be it i5 or i7. I'm rather interested in what you can get out of the 6700 with a Z170 board.
Anyway, I second the request for an efficiency investigation. The 6700K started at insanely high stock voltages, so no wonder it's not better than Haswell in this regard. AT also showed some numbers with optimal core voltage, but those start at 4.3 GHz and 1.20 V, which says nothing about any of the other chips, even in stock configurations (i.e. without undervolting).
I also add my request. Based on Intel's comments, those tests likely need to be redesigned to approximate real-world usage, not user tasks recorded and then played back as fast as possible. I'd love to see efficiency from typical desktop usage, gamer, home theater, and "enthusiast". I'm sure you have several users that recordings could be taken from and played back to simulate these. Yes, the tests would take longer to perform, but it appears that's going to be a requirement to achieve accurate efficiency tests at this point.
Measuring response times might be crucial for such tests, as the throughput might not be the most important metric any more. All tasks will have finished before the end of the benchmark, if played back slowly enough.
Hard to care at this point, given how little they offer and at what prices. The core does seem to be rather big, and far too big for low wattage - perf per area is very, very low there. Core M is still insanely priced given how little it does. I was wondering if they'll do a hard price cut ahead of A72 SoCs, since at that point it will be a lot easier for folks to realize how absurd Intel's pricing is. A few days ago I noticed a little board with a quad A7, 1GB RAM and plenty of connectors for just $15. That kind of computing device makes you wonder about where the world would be if we weren't stuck on Wintel. The premium this monopoly adds to PCs is heartbreaking at this point. At least in mobile things are OK for now.
Core M is on the order of 10x the performance of a quad A7. Core M is also a pretty small die; it is a similar die size to mobile SoCs. Not sure what you are talking about... it's a different product for a different price.
I'm pretty sure Intel intentionally gimped the thermal performance with inferior TIM underneath the heat spreader. I believe this is a ploy to allow them to release a part with improved TIM some time down the road as a new, higher-clocking revision. I think this is a way to gate performance intentionally so that they can sell essentially the same part over several releases.
Without significant competition from AMD, this is the kind of thing that Intel can unfortunately get away with.
You guys are ignoring the G4500 on desktop. It's marked as "HD530", which is either GT2 or GT1.5, since an early July driver shows the desktop name for GT1.5 as "HD530".
"Interestingly enough, you could still have five native gigabit Ethernet controllers on [Skylake-Y]."
I think that's wrong. You will notice that all other controllers have numbers. For example, for Skylake-U, there is SATA #0, SATA #1, and SATA #2, with SATA #1 appearing twice because you have a choice of which pins that controller is connected to. When GbE appears in the chart, there is no number. I'd suggest that is because there is only a single Gb Ethernet controller, with several different choices of which pins the Ethernet controller is connected to.
Will we see some performance reviews of Z170 motherboards any time soon, (including POST timings)? I've been looking forward to upgrading to Skylake for a while now, but I'm not going to pick a motherboard before I know which one to go for.
Yeah, I'd like to see MUCH more comment and examination on the reasons for this. It may be being spun as something to prevent malware from attacking other legitimate processes, but it's equally a box of delights for malware of any kind if it can insert itself into a protected zone and have all its code and data safe from any debugging or monitoring. And when I say malware of any kind, NSA back door is indeed what screams loudest to me. With MS preventing users from refusing individual updates to Windows 10, that's a mass surveillance perfect storm just waiting to happen down the line unless there is intense scrutiny of any use of this.
The risk in this single feature could be the deciding factor for me not to move beyond Haswell-E with Intel.
I'd want to know that as well. And is there still "multicore optimization", i.e. max turbo frequency for all cores? With an increased power budget this would make the 6700 fly... while staying in an energy-efficient regime, in contrast to full-throttle OC of the 6700K.
Even if it's limited to Z chipsets, it's still great. I'm sure if that is actually possible, some mobo maker will come up with cheapish Z boards (like ASRock).
Of course they will... just as you go out to buy your 6700K, in a year or two they will release a 6790K with better thermals. (Let's not hold our breath on 6-core puppies... Intel doesn't seem interested as long as they have no competition.)
Any word on AVX-512, especially on those mobile Xeons? The same question applies to Xeons for the desktop socket. I suspect they'll simply be the same die, so if they have it, Intel would be deactivating it for all regular CPUs. The other option would be: not all Xeons have it, and the big dies actually have modified cores (as Intel hinted at IDF, without giving any details).
Oh, and in the conclusions page you are mentioning 4+3 parts. From the article I understood there are only going to be 4+4 (128MB) parts and 2+3 (64MB) parts.
The latter are available as both 15 and 28W parts, though it is quite possible that the GPU performance takes quite a hit with the lower TDP. I hope you'll get to do comparisons between different GPU SKUs.
I have my doubts about whether the 72 EU graphics will bring about significant performance improvements. Looking at the trend of integrated graphics by Intel over the last 2 generations, between a 24 and 48 EU solution, the improvements are marginal in most cases. I believe it's limited by 2 things: memory bandwidth, and most importantly, the amount of power it can draw. Memory bandwidth can be rectified with the eDRAM, but it's still limited by power. I feel the difference between Skylake and Broadwell graphics is negligible if we compare the same class of processor.
I'm failing to see how transistor counts and die size mean anything to anyone else. "Hey guys, Intel has 2 billion transistors in their CPU, we should make ours have 2 billion and 1! That'll show them!"
If you know a little bit about their manufacturing/fabs (FinFETs; double, triple or quadruple patterning?), plus how many transistors they have in their CPU, and you know the exact W×L and total area in square mm, then it's fairly trivial to come up with some pretty good guesses about their financial model.
How many CPUs can they make from a 300mm wafer in a day? How many of a given batch do they need to throw out, and how many can they repurpose towards lower-priced parts? In other words, what is Intel's real cost for manufacturing a given part, and how cheaply can they sell it and still make a profit?
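As a sketch of that kind of estimate, the usual gross-dies-per-wafer approximation looks like the following (the ~122 mm² die area is just an assumed ballpark for a 4+2 Skylake die, and defect yield / partial-die salvage are ignored entirely):

```c
#include <stdio.h>
#include <math.h>

/* Gross dies per wafer (no yield model):
   dies ≈ pi*(d/2)^2 / A  -  pi*d / sqrt(2*A)
   where d = wafer diameter (mm), A = die area (mm^2). */
static double gross_dies(double wafer_mm, double die_mm2) {
    const double pi = 3.14159265358979;
    double r = wafer_mm / 2.0;
    return pi * r * r / die_mm2 - pi * wafer_mm / sqrt(2.0 * die_mm2);
}

int main(void) {
    double dies = gross_dies(300.0, 122.0); /* assumed ~122 mm^2 die */
    printf("~%.0f gross dies per 300mm wafer\n", dies); /* roughly 520 */
    return 0;
}
```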
Part of me still doesn't believe that info should be locked away (and Anand already found the die size so you can get a good enough estimate with that and previous numbers).
At the end of the day, the only thing the consumer cares about is how fast it performs. The other numbers are just for pissing contests.
Actually, it seems that power consumption is the only thing that matters to consumers, even on the desktop. All this talk about AMD's lack of competition being the reason why we aren't seeing meaningful generational performance improvements is just that: talk.
The real thing that hampers performance progress is consumers' plain refusal to upgrade for performance reasons (even a doubling in performance is not economically viable to produce since no one, except for me it seems, will buy it). Consumers only buy the lowest power system that they can afford. It has nothing to do with AMD. Even if AMD released a CPU that is 4x faster than piledriver, it wouldn't change Intel's priority (nor would it help AMD's sales...).
Sorry for my tone, but "I'm failing to see" how transistor counts don't mean more to consumers than to anyone else. So, after 10 years of blissful carelessness (because duuude, it's user experience dat matters, ugh...), you will have everyone deceiving you on what they offer at the price point they offer. Very convenient, especially if they are not able to sustain an exponential increase in performance and pass to the next paradigm to achieve it.
Because until very recently we have been seeing mostly healthy practices, despite the fact that you could always meet people pointing to big or small sins. Big example: what's the need of an IGP on a processor that consumes 90 watts, especially a GPU that is tragically subpar? To hide the fact that they have nothing more, CPU-wise, to offer the consumer at 90 watts (in the current market situation), and to have an excuse for charging more for a theoretically higher-consuming and "higher performing" CPU? Because what bugs me is: what if the 6700K lacked the IGP? Would it perform better without a useless IGP dragging it down? I really don't know, but I feel it wouldn't. Regarding the mobile solutions and the money- and energy-limited devices, the IGP could really prove to be useful to a lot of people, without overloading their device with a clunky, lowly discrete GPU.
Yes, it would perform exactly the same (if the iGPU is not used; otherwise it needs memory bandwidth). But the chip would run hotter since it would be a lot smaller. Si is not the best thermal conductor, but the presence of the iGPU spreads the other heat producers out a bit.
Thermodynamics "work" and don't care if they're being applied to an IC or a metal brick. Silicon is a far better heat conductor than air, so even if the GPU is not used, it will transfer some of the heat from the CPU + Uncore to the heat spreader.
My comment was a bit stupid, though, in the way that given how tightly packed the CPU cores and the uncore are, the GPU spreads none of them further apart from each other. It could have been designed like that, but according to the picture on one of first few pages it's not.
No, it wouldn't. You could easily spread out the cores by padding them with much more cache and doubling their speculative and parallel execution capabilities. If you up the power available for such out of order execution, the additional die space could easily result in 50% more IPC throughput.
50% IPC increase? Go ahead and save AMD, then! They've been trying that for years with probably billions of R&D budget (accumulated over the years), yet their FX CPUs with huge L3 don't perform significantly better than the APUs with similar CPU cores and no L3 at all.
Yes, but I specifically mentioned using that extra cache to feed the greater amount of speculative execution units made available by the removal of the iGPU.
Sadly, AMD can't use this strategy because GlobalFoundries' and TSMC's manufacturing technology cannot fit the same amount of transistors into a given area as Intel's can. Furthermore, their yields for large dies are also quite a bit lower, and AMD really doesn't have the monetary reserves to produce such a high-risk chip.
Also, the largest fraction of that R&D budget went into developing smaller, cheaper and lower power processors to try and enter the mobile market, while almost all of the rest went into sacrificing single threaded design (such as improving and relying more on out of order execution, branch prediction and speculative execution) to design Bulldozer-like, multi-core CPUs (which sacrifice a large portion of die area, that could have been used to make a low amount of very fast cores, to implement a large number of slow cores).
Lastly, I didn't just refer to L3 cache when I suggested using some of the free space left behind by the removal of the iGPU to increase the amount of cache. The L1 and L2 caches could have been made much larger, with more associativity to further reduce the amount and duration of pipeline stalls due to not having a data dependency in the cache. Also, while it is true that the L3 cache did not make much of a difference in the example you posted, it's also equally true that cache performance becomes increasingly important as a CPU's data processing throughput increases. Modern CPU caches just seem to have stagnated (aside from some bandwidth improvements every now and then), because our CPU cores haven't seen that much of a performance upgrade since the last time the caches were improved. Once a CPU gets the required power and transistor budgets for improved out-of-order performance, the cache will need to be large enough to hold all the different datasets that a single core is working on at the same time (which is not a form of multi-threading, in case you were wondering), while also being fast enough to service all of those units at once, without adversely affecting any one set of calculations.
Your representation of Skylake's CPU/IPC performance is inaccurate and incomplete due to the use of the slowest DDR4 memory available. Given the nature of DDR4 (high bandwidth, high latency), it is an absolute necessity to pair the CPU with high-clockspeed memory to mitigate the latency impairment. Other sites have tested with faster memory and seen a much larger difference between Haswell and Skylake. See HardOCP's review (the gaming section specifically) as well as TechSpot's review (page 13, memory speed comparison). HardOCP shows Haswell with 1866 RAM is actually faster than Skylake with 2133 RAM in Unigine Heaven and Bioshock Infinite @ lowest quality settings (to create a CPU bottleneck).
I find TechSpot's article particularly interesting in that they actually tested both platforms with fast RAM. In synthetic testing (Sandra 2015) Haswell with 2400 DDR3 has more memory bandwidth than Skylake with 2666 DDR4; it is not until you pair Skylake with 3000 DDR4 that it achieves more memory bandwidth than Haswell with 2400 DDR3. You can see here directly the impact that latency has, even on bandwidth and not just overall performance. Furthermore, in their testing, Haswell with 2400 RAM vs. Skylake with 3000 RAM shows Haswell being faster in the Cinebench R15 multi-threaded test (895 vs. 892). Their 7-zip testing has Haswell leading both Skylake configurations in a memory-bound workload (32MB dictionary) in terms of instructions per second. Finally, in a custom Photoshop workload Haswell's performance is once again sandwiched between the two Skylake configurations.
Clearly both Haswell and Skylake benefit from faster memory. In fact, Skylake should ideally be paired with > 3000 DDR4 as there are still scenarios in which it is slower than Haswell with 2400 DDR3 due to latency differences.
Enthusiasts are also far more likely to buy faster memory than the literal slowest memory available for the platform, given the minimal price difference. Right now on Newegg one can purchase a 16GB DDR3 2400 kit (2x8) for $90, a mere $10 more than an 1866 16GB kit. With DDR4 the situation is only slightly worse. The cheapest 16GB (2x8) 2133 DDR4 kit is $110, and 3000 goes for $135. It is also important to note that these kits have the same (primary) timings with a CAS latency of 15.
So now we come to your reasoning for pairing Skylake with such slow RAM, and that of other reviewers, as you are not the only one to have done this. Intel only qualified Skylake with DDR4 up to 2133 MT/s. Why did they do this? To save time and money during the qualification stage leading up to Skylake's release. It is not because Skylake will not work with faster RAM; there isn't an unlocked Skylake chip in existence that is incapable of operating with at least 3000 RAM speed, and some go significantly higher. HardOCP was able to test their Skylake sample (with no reports of crashing or errors) with the fastest DDR4 currently available today, 3600 MT/s. I have also heard anecdotally from enthusiasts with multiple samples that DDR4 3400-3600 seems to be the sweet spot for memory performance on Skylake.
In conclusion, your testing method is improperly formed, when considered from the perspective of an enthusiast whose desire is to obtain the most performance from Skylake without over-spending. Now, if you believe your target audience is not in fact the PC enthusiast but instead a wider "mainstream" audience, I think the technical content of your articles easily belies this notion.
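To put the latency side of that argument in rough numbers, here is a simple first-word CAS-latency calculation (the DDR3-2400 CL11 timing is an assumed typical kit; real access latency also involves tRCD, tRP and controller overhead):

```c
#include <stdio.h>

/* First-word CAS latency in nanoseconds: CL cycles at the memory clock,
   where the memory clock is half the transfer rate (DDR). */
static double cas_ns(double mt_per_s, int cl) {
    return cl * 2000.0 / mt_per_s;
}

int main(void) {
    printf("DDR3-2400 CL11: %.1f ns\n", cas_ns(2400, 11)); /* ~9.2 ns  */
    printf("DDR4-2133 CL15: %.1f ns\n", cas_ns(2133, 15)); /* ~14.1 ns */
    printf("DDR4-3000 CL15: %.1f ns\n", cas_ns(3000, 15)); /* ~10.0 ns */
    return 0;
}
```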
The primary reason Skylake is only qualified for DDR4-2133 is that there is no faster JEDEC standard yet. That's also the reason AT tested at that, I think. However, I agree: restraining an "enthusiast OC CPU" like that seems silly, especially given the very small price premium of significantly faster DDR4. I hope some memory scaling analysis is included in future Skylake articles.
Good point, you are correct in that this is the root cause, my analysis suggests the reason is one step further down (or up, depending on your view) the ladder.
Someone correct me if I'm wrong here, but I believe that is a motherboard feature. As long as you're using a Z series chipset-based motherboard, you *should* be able to run faster memory.
Sounds plausible... but no one is talking about that. Damn it! Not even a single review of the regular 6700, and it has been on the shelves in better quantities than the 6700K for 2-3 days (Germany).
Actually, non-K Haswell chips like the Core i7 4790 using a Z chipset can most of the time reach DDR3-2133 with no problems. There is only a multiplier lock from Intel, not a memory lock to 1600 MHz.
Actually, enthusiasts only focus on achieving the lowest possible system power. It seems that most people wouldn't mind further gimping Skylake's performance by underclocking DDR4-2133 to the lowest allowed setting, in the name of needless desktop power savings.
Yea, I'm sure there are a lot of reasons why we are in this situation (buying a $100 CPU and then having to spend more than that on electricity costs seems a little silly if you are on such a tight budget to begin with); but I still find it incredibly sad that, even if you are willing to spend $1000 on the best CPU performance that money can buy (Xeons aren't overclockable, so a heavily overclocked consumer CPU is normally able to beat almost all single-socket workstations, even in heavily multi-threaded workloads), you are still constrained by some arbitrary power limit.
Being willing to spend your hard-earned cash on $1000-$2000 of power bills per year still doesn't change the fact that you cannot, and will NEVER be able to, build a system that can make proper use of that amount of power for maximum high-framerate gaming performance. (P.S. I currently live in South Africa, where our electricity costs are pretty much on par with, and sometimes (if your skin color, like mine, isn't black) even greatly exceed, those of Germany. Heck, once you pass
I expect that most of the mobile Xeons will end up in embedded systems which need the reliability of ECC, such as wireless core networks, including functions such as transcoding which can use the GPU for audio/visual processing. The GPU actually makes for a pretty effective DSP, although not necessarily as power-efficient as a DSP.
A lot of CPUs, Broadwell in particular, are in very short supply, and often the suppliers are charging much more for them than they are worth. This is particularly true of Iris graphics on the desktop.
At this point the only interest I have is for a cheap, ultra-efficient media server to fill out my new, as-yet-unused mini-ITX case. My Sandy Bridge i5 is still doing quite well for my power machine. When the lower-end stuff comes out I'll take a look.
Yeah, that's because they are starting with a MUCH MUCH less refined architecture. It's easy to improve a ton on something that isn't as good to begin with... Plus, there is only so much you can do, you quickly run into diminishing returns.
This is seriously like one of the simplest concepts ever but people still don't seem to get it....
"extide: Plus, there is only so much you can do, you quickly run into diminishing returns."
That's a subjective POV. If you were to remove the base SRAM and DRAM and replace them with tens-of-femtoseconds Correlated Electron RAM and/or MRAM, in both SRAM and new WideIO2 configurations for these ARM-invested NVRAMs, and roll them out in the usual ARM/foundry collaborations, then you begin to see Intel's latest advertised, far slower "3D XPoint" as a substandard technology in comparison... http://www.zdnet.com/article/arm-licenses-next-gen...
"For users who actively want an LGA1151 4+4e configuration, make sure your Intel representative knows it, because customer requests travel up the chain."
Who do I need to talk to? Seriously I didn't get Broadwell because I knew Skylake was right around the corner. I mean why introduce a pocketable 5x5 platform, just to announce that you have no plans to actually release the perfect processor for that platform?
"For Skylake-U/Y, these processors are not typically paired with discrete graphics and as far as we can tell, the PCIe lanes have been removed from these lines. As a result, any storage based on PCIe (such as M.2) for devices based on these processors will be using the chipset PCIe lanes."
According to Intel Ark, the 15W U-series CPUs (at least the i5s and i7s (including the Iris 6650U), which I looked at) have 12 PCIe 3.0 lanes, available in "1x4, 2x2, 1x2+2x1 and 4x1" configurations. Worth updating the article?
And reading on, I suddenly realize why you said what you did. 12 lanes does indeed line up with the ones from the PCH-LP. Does this point toward more of an SOC-like integration of features for U-/Y-series CPUs?
"A lot of these improvements are aimed directly at the pure-performance perspective (except L2 and FMUL to an extent), so you really have to be gunning it or have a specific workload to take advantage."
I can't believe that to be true, as it's a tock and yet no real-world view can call this tock an improvement, never mind "so you really have to be gunning it or have a specific workload to take advantage," as the real-world x264/x265 results show no benefit whatsoever here...
Also, Ian, was it an oversight on your part that in all the 9 pages of analysis you did not point out the missing generic "AVX2 SIMD" in most of the parts launched today? Please note that the official Intel slides specifically remove any mention of any AVX SIMD in their latest charts etc.
It seems a clear-cut choice on Intel's part to try and stop news outlets from mentioning and pointing out the lack of 2015-class SIMD on many of these SoCs released today...
Can you at least go through the included charts and point out all the cores/SoCs that Do Not include generic AVX2 SIMD, to make it clear which cores/SoCs to actually buy (anything with AVX2+) and what new/old SoCs to discard (anything with only antiquated 2006 SSE4 SIMD)?
Actually, consumers will actively avoid AVX2 instruction set capable processors, since they could use more power (especially on the desktop, where Intel's power limiter allows AVX2 to really boost application performance / power consumption)
I don't see any logic to your "consumers will actively avoid AVX2 instruction set" comment, as by definition SIMD (Single Instruction, Multiple Data) describes computers with multiple processing elements that perform the same operation on multiple data points simultaneously.
So in fact AVX2 256-bit SIMD does exactly the opposite of wasting more power compared to the older/slower data paths today. It's also clear in the mobile/ultra-low-power space, where "Qualcomm's new DSP technology boasts heavy vector engine—that it calls Hexagon Vector eXtensions or HVX—for compute-intensive workloads in computational photography, computer vision, virtual reality and photo-realistic graphics on mobile devices. Moreover, it expands single instruction multiple data (SIMD) from 64-bit to 1,024-bit in order to carry out image processing with a wider vector capability..." That is in fact doubling even Intel's as-yet-unreleased 512-bit AVX3, with their lowest-power 1,024-bit SIMD to date, although it's unknown whether it's a refreshed NEON or another complementary SIMD there... we shall see soon enough.
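For what "same operation on multiple data points" means in practice, here is a minimal AVX2 sketch (eight 32-bit integer adds from a single instruction; compile with -mavx2 - this is just an illustration of the instruction set, not of any particular SKU discussed here):

```c
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    int a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    int c[8];

    __m256i va = _mm256_loadu_si256((const __m256i *)a);  /* load 8 ints */
    __m256i vb = _mm256_loadu_si256((const __m256i *)b);
    __m256i vc = _mm256_add_epi32(va, vb);   /* one AVX2 instruction, 8 adds */
    _mm256_storeu_si256((__m256i *)c, vc);

    for (int i = 0; i < 8; i++)
        printf("%d ", c[i]);                 /* 11 22 33 44 55 66 77 88 */
    printf("\n");
    return 0;
}
```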
The lineup looks pretty good, but honestly following x86 processors has been pretty boring since Sandy Bridge, as each new generation brings only pretty mundane evolutionary changes. Fingers crossed that AMD can actually bring something competitive to market with Zen and shake things up a little, as the under-performing Bulldozer and its derivatives have really let Intel rest on its laurels.
The article says that vPro enabled processors were NOT launched. However, all the following processors are vPro enabled and were launched at the event:
Well, a decrease in performance from Cannonlake to Skylake would be correct. However, I assume you mean Haswell, not Cannonlake, and that is probably due to the L2/FMUL changes. You are also looking at chips with different clockspeeds, with Haswell having a faster clock, so that also contributes to this result.
It is somewhat disappointing that Intel has decided to make changes that significantly favor power consumption over performance.
I have a feeling the Xeons will not have these same changes, so it will be interesting to see what the Skylake E5's are like...
Mmm, I have the very opposite feeling: I think that these changes were done explicitly for the benefit of server and mobile chips. These two categories (server and mobile) are greatly limited by their power usage (and by their ability to effectively remove the generated heat), while being only marginally dependent on FPU performance.
So trading some performance for improved power efficiency suddenly makes a lot of sense, especially if Intel wants to continually increase Xeon core counts (and it seems so).
I'd be interested to know what its relative performance is vs a discrete card like 750ti when it comes to the SM5.0 version of NNEDI3 with MPDN. Intel GPUs surprisingly run twice as fast with the shader version as compared to the OpenCL version (AMD loves shader too - the only exception here is NVIDIA's Maxwell architecture). It'll be interesting to see if Skylake is the perfect HTPC!
If I understand the page 3 slide correctly, eDRAM will only be for BGA - and thus no Iris iGPU for desktop. Broadwell chips may be a bit faster for those not needing a GPU for gaming and similar.
Hm, is this a paper launch only? The only available parts until now are the 6600K and 6700K (at all the big Czech e-shops as well as ones like Newegg). Awaiting the 6100T eagerly (want to build an mITX baby for my mom since her old (ancient) computer died 2 weeks ago)... and for obvious reasons I'd rather prefer the new platform over the old one in case there was ever a need to upgrade something (which I doubt, but still...)
I feel like there's something missing here. We get 15w dual-core parts with Iris GT3e, but quad-core parts are all 45w with no GT3e. Indeed, there's no quad-core mobile chips with Iris graphics although Broadwell and Haswell both had them in the 45w quad-core range. There's certainly no issue fitting it in the power envelope, given you can literally fit 3x 2-core chips with GT3e into the 45w TDP.
I like to have a laptop for its portability and am not willing to buy a second system for my occasional gaming. In my experience, games like Civ 5, Civ BE and Skyrim are happy with two cores but would like more graphics power than my current laptop offers (i7-4700MQ with no additional graphics chip)...
To my surprise, I find that the H series of processors has less graphics power than the U series. I suspect that the U series' 2 cores / 4 threads would be just fine for the games I play, and I know they would like the additional graphics power. So I'm likely to be looking at the U series as I look at replacements for my current laptop, not the H series as I expected.
I'm curious if others reach that conclusion as well.... and am looking forward to anandtech's future comparisons between the H and U series graphics capabilities.
> On the other side of the coin, the FMUL (floating point multiply) has increased in latency over Broadwell, and returned to the same as Haswell. We are told that this is due to design decisions that allows for better performance when it comes to creating enterprise silicon, which is an interesting explanation in itself.
They mean AVX-512 support, which required microarchitecture changes unfavorable for old AVX-256 code.
Wow, in most respects GPU FLOPs performance peaked in the Haswell generation and is barely back to the same levels in Skylake! They have just been trying to cut power consumption - 3 years of no progress in GPU throughput!
tipoo - Tuesday, September 1, 2015 - link
The bit of eDRAM on even ultrabook parts may be one of the more exciting bits of Skylake. Should bring baseline performance up significantly, even with half the eDRAM of the Pro 5200.tipoo - Tuesday, September 1, 2015 - link
That 72EU part also comes shockingly close to XBO GPU Gflop numbers, which, while not directly comparable, means integrated graphics will catch up to this gens consoles very soon.
RussianSensation - Wednesday, September 2, 2015 - link
But it's irrelevant in the real world for 5 reasons:1) Intel's best CPUs don't focus on IGP (i.e., i7-6600K, 6700K, 5820K-5960X) which means someone who is interested in gaming is buying a dedicated i5/i7, especially K series and going for a discrete graphics card.
2) Since we are discussing PC gaming, not console gaming, a budget gamer is going to be better off getting a lower end discrete GPU like the $90 GTX750Ti or even going on the used market and buying a $100 HD7970/GTX670, instead of trying to play games on Intel's 72 EU part.
3) Looking at historical pricing of Intel's parts with eDRAM, they'll probably cost almost as much as the Xbox One/PS4.
4) No one buys an Xbox One/PS4 because they want the best graphics. If you want that, you build a Core i7 + GTX980Ti SLI/Fury X CF system. People buy consoles for ease of use, to play online with their friends, and to have exclusives. In the areas the consoles excel, a 13-15" PC laptop with a 72 EU Intel part will fail miserably in comparison to the gaming experience one would get on a PS4/XB1 + large TV in the living room. Frankly, these 2 devices aren't competing with each other.
5) Overall cost of the device - a $300 Intel CPU is worthless without a motherboard, ram, SSD/HDD, keyboard, etc. That means trying to compare how fast an Intel's CPU with 72 EUs and EDRAM is vs. an Xbox One and PS4 and ignoring the Total System Cost is misleading.
I guarantee it that anyone interested in PC gaming could care less about Intel's IGP as any serious gamer will be getting a Skylake laptop with a Maxwell and next year a Pascal GPU.
HideOut - Wednesday, September 2, 2015 - link
No where in his comment did he mention same performance and cost. He was merely making an observation.Jumangi - Wednesday, September 2, 2015 - link
That Intels Integrated graphics can finally match a ten year old console? Big deal...IanHagen - Wednesday, September 2, 2015 - link
No, that it matches a console released last year.SunLord - Wednesday, September 2, 2015 - link
I doubt it they might be able to match the spec numbers but actual real world performance will likely still favor the console simply because of the specialized optimizations developers use on consoles vs the more generic options pc games are forced to use thanks to the near infinit hardware combinatiosn one can havetuxRoller - Sunday, September 6, 2015 - link
There's already a vulkan driver ready for Intel (on Linux) made by lunarg. That will allow for the optimizations needed if the developers care to make them.Jumangi - Wednesday, September 2, 2015 - link
Ahaha you think this thing will match an Xbox one? Wow the delusion is seriously big with some people. Also the cost of one of these high end Iris Pro CPUs alone will cost more than an entire console or a decent AMD laptop with still better graphics.BillyONeal - Wednesday, September 2, 2015 - link
Considering both the PS4 and XBO also use integrated graphics solutions from a couple years ago it isn't far fetched.extide - Wednesday, September 2, 2015 - link
Not really, they use GCN stuff, which yes is in AMD APU's but the implementation in PS4/XBO are much larger than the ones in APU's and much closer to the ones in discreet cards.tipoo - Friday, September 4, 2015 - link
All I said was it comes close to it iin raw Gflops on the GPU. Wow the reading ability of some people.masouth - Friday, September 4, 2015 - link
Unfortunately, reading comprehension is not the strong suit of the average thread troll. You can couch something in caveats all day and some fool will come along and run off ranting or name calling regarding something that you already covered or that was not even part of what you said.MapRef41N93W - Friday, September 4, 2015 - link
AMD better IGPU? LOL. The 7870k gets slaughtered by Broadwell Iris-Pro and Skylake is offering up to 50% more performance. AMD were officially made irrelevant in the only market they even had left (APUs) this year.MapRef41N93W - Friday, September 4, 2015 - link
And the proof is right here on this very site http://anandtech.com/bench/product/1497?vs=15007870k losing by 20-30% on average vs Broadwell i5 in iGPU tests.
MapRef41N93W - Friday, September 4, 2015 - link
Inb4 the AMD shill starts bringing up the price difference like the only reason you buy a CPU is for the iGPU.FourEyedGeek - Wednesday, September 9, 2015 - link
The 7870 in a PlayStation 4 isn't that powerful, wasn't on release and certainly isn't now.InquisitorDavid - Saturday, September 12, 2015 - link
More expensive, yes.But haven't you heard? Broadwell's Iris Pro already surpassed AMD APU graphics performance. Couple that with CPU performance that AMD can't touch at this time means it's already better in all other aspects, except price.
Notmyusualid - Wednesday, September 2, 2015 - link
Nonsense.Pissedoffyouth - Wednesday, September 2, 2015 - link
this is a big deal for me, building the most compact mini PC's to hide behind my monitor and play at 1080P.tipoo - Wednesday, September 2, 2015 - link
Two year old, you're only off by 8 years though.FourEyedGeek - Wednesday, September 9, 2015 - link
They were referring to XBox One, pay attention.Braincruser - Wednesday, September 2, 2015 - link
You are forgeting casual gamers and macs. Currently there is no reason in getting a GPU lower than GT950M in a laptop. And in this tier a laptop can be a very good console replacement with steam desktop mode and a wireless controller. Especially with a wi-di capable screen.I am talking about 7-800$ laptops that play games from 2012 and before at any resolution. and current games at 720p.
And at 950M you are at a gaming laptop level and are above 1000$ and this is a different category.
Jumangi - Wednesday, September 2, 2015 - link
Any sort of "casual gamer" will get far better value from a AMD A10/FX laptop. Good enough CPU power and far better GPU than anything Intel does.IanHagen - Wednesday, September 2, 2015 - link
Really? "Far better value"? First, the GPU isn't far better. It's slightly better.And that's it. Less processing power, more heat, less battery life and potentially heavier and bulkier laptop. Certainly a "casual gamer" will accept all this drawbacks for the sake of a handful extra FPS.Jumangi - Wednesday, September 2, 2015 - link
Yes it will be better and costs far less. These Intel chips cost way more. So go,ahead and think you will be gaming on these things without dropping a grand or more on the laptop.Notmyusualid - Wednesday, September 2, 2015 - link
Hmm, really?Did you see how Intel have come on recently in IGPs?
And some games are quite CPU dependent too.
I'll have to politely disagree with you.
retrospooty - Wednesday, September 2, 2015 - link
"Any sort of "casual gamer" will get far better value from a AMD A10/FX laptop"- I think you are overestimating AMD's GPU lead and underestimating the power and thermal advantages on the Intel side. I wouldn't buy an AMD chip in a laptop under any circumstances. They had some great ships back int he Thunderbird and Athlon64 days, but since ht eCOre2 came out in 2006 they are miles behind. They cant even see Intels dust at this point.
Jumangi - Wednesday, September 2, 2015 - link
No people keep Waaayy overestimating Intels integrated graphics. They still absolutely suck for any sort of gaming. If you want a somewhat capable laptop that can do at least decent mobile gaming AMD is still the only option unless you go with a discrete GPU. That's the fact.extide - Wednesday, September 2, 2015 - link
That's mostly because there haven't really been any Iris Pro SKU's for gaming laptops -- looks like that is going to change though..just4U - Wednesday, September 2, 2015 - link
I have to agree with Jumangi,If your gaming plans revolve around a integrated GPU your still better served to go the AMD route.. While the CPU is not as fast it's no slouch either.. and gaming performance is going to be acceptable in comparison on most titles.
sundragon - Monday, September 7, 2015 - link
Um, first hand experience: Macbook Pro 2015, (Iris 6200): Skyrim, ESO, Civilization 5, Homeworld, all run at 1440x - I love all these people talk about intel integrated graphics sucking, meanwhile I'm getting crushed in Civ5 and kicking ass in Homeworld and ESO.I'm not lugging an integrated laptop around to play games, I have a laptop and I like to have ONE LAPTOP, and guess what, everything I've thrown on here has played. My MBA 2012 HD4000 struggled with Skyrim and Civ 5 but I still played. Please stop talking theoretical and talk about your actual rig... /end rant
BurntMyBacon - Thursday, September 3, 2015 - link
@retrospooty: Core2 era was more a return to parity. One of the most even matchups I can remember was the ironically similarly numbered Phenom II 955 and the Core 2 Quad 9550. Nahalem is what really did the damage. Here's hoping Zen can put AMD back in the ballpark.I do think AMD has a pretty significant GPU advantage in the area of gaming over Intel. However, as you've stated, the power/thermal constraints do not allow them to fully exploit this advantage. A CPU intense game, even if not CPU limited, will chew up much of the GPU's available thermal envelop, effectively eliminating any advantage AMD had. Granted, there are cases where the thermal solutions in play provide the necessary thermal headroom, but these are mostly found in laptops that are already using discrete chips.
MrBungle123 - Thursday, September 3, 2015 - link
The Phenom II didn't come out until after Intel had retired the Core 2 line. Everyone wants AMD to be competitive but the fact is they are miles behind Intel.MapRef41N93W - Friday, September 4, 2015 - link
Guess you didn't read the review of Broadwell Iris Pro on this very site. AMD's GPU loses by as much as 20-30% in most games vs Broadwell Iris Pro. Skylake Iris Pro will be offering up to 50% more performance.V900 - Wednesday, September 2, 2015 - link
4: Not everybody who are interested in a gaming machine can afford a Core i7 and several 1000$ graphic cards in a SLI configuration. A lot of gamers have a budget between 500$-1000$, and if you can get/get close to XB1 performance with just an Intel IGP, it would be perfect for that kind of budget.Also: Why would you think a 13' laptop with Iris Pro and 72 execution units would "fail miserably" in comparison with an XB1/PS4?!?
That's ridiculous. Any advantage the console would have is tiny.
Just get two wireless controllers and hook up the laptop to your HDTV with a HDMI cable, and the experience would be close to identical....
MrSpadge - Wednesday, September 2, 2015 - link
"Also: Why would you think a 13' laptop with Iris Pro and 72 execution units would "fail miserably" in comparison with an XB1/PS4?!?"Because he specifically mentioned this in conjunction with "user experience". The PC gives you freedom but certainly not the ease of use of a console. Which is mainly why these things exist at all.
Jumangi - Wednesday, September 2, 2015 - link
Lolz if you think an Intel only machine with any sort of Integrated graphics(even the best Iris Pro) will give you anything close to an Xbox One game your seriously naive and ignorant. Stop looking at theoretical Gflops numbers to make comparisons.IanHagen - Wednesday, September 2, 2015 - link
Well, a few posts back up you're stating that AMD's A10 APU have "far better graphics" when it failed to beat last generation Iris 5200 GPU and now there you are, talking about naiveness and ignorance.Jumangi - Wednesday, September 2, 2015 - link
Compare actual gaming on the two mr naive one. also compare the huge cost differences of these chips. An Iris Pro laptop will be far far more expensive.jimmy$mitty - Thursday, September 3, 2015 - link
Is it your love of AMD that makes you say this? Think about it. The XB1 uses DDR3 for its GPU. This will use DDR4. The XB1 has a small eDRAM cache. Skylake has a small eDRAM cache. The XB1 has a very weak AMD Jaguar based CPU. This will have a much stronger Skylake based CPU.So why is it so far fetched to think that Skylake could get close to matching the XB1? It wont outright beat it, not this one maybe the next one, but it could get close with proper optimizations and DX12.
http://www.anandtech.com/show/6993/intel-iris-pro-...
http://www.anandtech.com/show/9320/intel-broadwell...
Haswell beat the top end AMD APU at the time and Broadwell makes the current A10 look even worse.
AMD is great if you are on a budget. But if you are looking simply for performance they are lagging behind in a lot of ways.
JKflipflop98 - Sunday, September 6, 2015 - link
Ah, I wondered who would make an actually well-reasoned posting. I am not surprised to see it's you.tipoo - Wednesday, September 2, 2015 - link
I didn't say it was a good value. Just interesting how times have changed, that Intel integrated graphics are this close to a two year old console already.eddman - Thursday, September 3, 2015 - link
Yes, they "could" care less.MobiusPizza - Friday, September 4, 2015 - link
As ArsTechnica and TechReport (http://arstechnica.co.uk/gadgets/2015/09/intels-sk... has noted, eDRAM has performance advantage even for people with discrete GPUsanubis44 - Tuesday, September 8, 2015 - link
"I guarantee it that anyone interested in PC gaming could care less about Intel's IGP as any serious gamer will be getting a Skylake laptop with a Maxwell and next year a Pascal GPU."I would argue that anyone interested in PC gaming will avoid laptops like the plague and buy/build a desktop PC so they can replace graphics/ram/CPU easily and pay a lot less for a DX12 card, and on that note, anyone wanting to build a DX12-ready gaming machine right now will be getting a Radeon 290/390(X) series card and skipping Maxwell altogether, as it doesn't support hardware asynchronous shaders.
ered - Sunday, February 14, 2016 - link
Well, when the MacBook gets it, you can stream your screen to the Apple TV, connect an Xbox One/PS4 controller, and play like you're on a console, having similar graphics and at the same time a computer for school etc. But of course these devices are not competitors to consoles; it's just interesting what is possible.
TallestJon96 - Wednesday, September 2, 2015 - link
You actually make a great point. Despite the fact that on a desktop an i5 paired with a $200 GPU will crush integrated graphics, on a laptop a 72 EU CPU could do some serious work. This, paired with DDR4, could kick integrated graphics up a notch, which is good for everyone, as it raises the lowest common denominator. Like you say, it probably won't be long until integrated graphics catch up with the Xbone, especially as they have a CPU advantage in many cases, and with DDR4 they have VERY similar system memory. It'll be a few more years after that until the PS4 is caught up with. I would add that tablets will probably catch the Xbone before the end of this generation. It could be an interesting future, where games could come to tablet, PC, and consoles simultaneously.
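(To put some rough numbers on the Gflops talk, a hedged sketch; the per-EU FLOP rate and the clocks are assumptions commonly cited for these parts, not figures from the article, and peak math says nothing about real games.)

# Peak FP32 throughput, back-of-the-envelope (assumed figures)
def gflops(units, flops_per_unit_per_clock, clock_ghz):
    return units * flops_per_unit_per_clock * clock_ghz

# Intel Gen9 EU is commonly quoted as 2x SIMD-4 FMA -> 16 FLOPs per clock per EU
iris_72eu = gflops(72, 16, 1.05)    # ~1210 GFLOPS
xbox_one  = gflops(768, 2, 0.853)   # 768 shaders with FMA -> ~1310 GFLOPS
ps4       = gflops(1152, 2, 0.80)   # ~1843 GFLOPS
print(f"72 EU Iris: ~{iris_72eu:.0f}, XB1: ~{xbox_one:.0f}, PS4: ~{ps4:.0f} GFLOPS")
# Bandwidth, drivers and sustained thermals decide actual game performance.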
Stochastic - Wednesday, September 2, 2015 - link
"... as it raises the lowest common denominator." That's the important bit. One reason there aren't more PC gamers is simply that there aren't that many people who have modern PCs powerful enough to run today's games. This limits the technical ambition of PC games as developers have to keep in mind the wider PC audience and not just us tech enthusiasts. If integrated graphics can continue improving generation to generation, in a few years time even $600 laptops will be capable of running games at comparable fidelity to the Xbox One. Adding substantive amounts of eDRAM to all integrated GPUs would go a long ways towards making that dream a reality.flashpowered - Wednesday, September 2, 2015 - link
I am hoping to replace my Arrandale laptop with an ultrabook, and really hope that the 15w or 28w Iris with eDRAM can give me something with a high resolution display and smoother running UI than Retina Haswell/Broadwell.JKflipflop98 - Sunday, September 6, 2015 - link
Dude, if you're still running Arrandale, just about anything you buy at this point is going to be a *MAJOR* upgrade across the board.tipoo - Tuesday, September 1, 2015 - link
"Intel’s graphics topology consists of an ‘unslice’ (or slice common) that deals solely with command streaming, vertex fetching, tessellation, domain shading, geometry shaders, and a thread dispatcher. "This part of their architecture seemed like the weak spot which led to little scaling between 1/2/3 slices going from DX11 to DX12. So will that remain the same with Skylake, or are there other differences that will allow better scaling with DX12?
extide - Wednesday, September 2, 2015 - link
That sentence is actually incorrect (the quote). The unslice is not the same as slice common.
The unslice is what's not in the slices, obviously, but the 'slice common' is what IS in the slice but which ISN'T the EUs themselves.
extide - Wednesday, September 2, 2015 - link
So, for example, GT2 has 1 unslice, 3 slice commons (1/slice) and 24 EUs (8/slice).
extide - Wednesday, September 2, 2015 - link
Actually, it's: GT2 has 1 unslice, 1 slice, 3 sub-slices, 3 slice commons (1/sub-slice) and 24 EUs (8/sub-slice).
ltcommanderdata - Tuesday, September 1, 2015 - link
Apple appears to take the Core M and use it in a 7 W cTDP-up configuration in the Retina MacBook. I wonder if the increase in performance would be worth the increased heat and power consumption to use U-class processors in a 7.5 W cTDP-down configuration instead? Or even try to accommodate 9.5 W cTDP-down U-class processors to take advantage of the GT3e to better drive the Retina display?
Kutark - Tuesday, September 1, 2015 - link
I feel like I should know this already, but what are they referring to with the whole 4+2, 2+2, etc.?
Kutark - Tuesday, September 1, 2015 - link
Nm, I think I figured it out. 4 cores + GT2, or 2 cores + GT2, etc.
HideOut - Wednesday, September 2, 2015 - link
Yep, nailed it. (The +2/3/4 is GT2/3/4.)
Braincruser - Wednesday, September 2, 2015 - link
The first number is CPU cores/modules, the second part is GPU modules.
extide - Wednesday, September 2, 2015 - link
Not the number of GPU modules, but the GTx name. For example, GT2 has 1 slice, GT3 has 2 slices, and GT4 has 3 slices.
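(To make the slice/sub-slice terminology concrete, a minimal sketch of the EU math as described in the comments above; the mapping is illustrative.)

# Gen9 tiers: each slice holds 3 sub-slices of 8 EUs
def eu_count(slices, subslices_per_slice=3, eus_per_subslice=8):
    return slices * subslices_per_slice * eus_per_subslice

for name, slices in [("GT2", 1), ("GT3", 2), ("GT4", 3)]:
    print(f"{name}: {slices} slice(s) -> {eu_count(slices)} EUs")
# GT2: 24 EUs, GT3: 48 EUs, GT4: 72 EUs. The 'unslice' (command streamer and
# geometry front end) sits outside the slices and adds no EUs.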
nandnandnand - Wednesday, September 2, 2015 - link
FIVR: "For Skylake-Y, this also resulted in an increase for the z-height of the processor due to having leveling transistors on the rear of the package."You mean Broadwell-Y, right?
Ryan Smith - Wednesday, September 2, 2015 - link
Correct. Thanks for pointing that out.
jay401 - Wednesday, September 2, 2015 - link
What's the ETA for the i7-6700 to hit U.S. retail/e-tail shelves?
HideOut - Wednesday, September 2, 2015 - link
A couple of weeks ago. I see them on newegg.com all the time.
jay401 - Wednesday, September 2, 2015 - link
Doubtful. You might have seen a 6700K model briefly for a moment before it instantly sold out again, but you haven't seen a 6700 non-K model since its launch just happened (and the SKU isn't even in Amazon or Newegg yet).
nandnandnand - Wednesday, September 2, 2015 - link
"This is followed up by either one, two or three slices, where each slide holds three sub-slices of 8 EUs," (page 6)SLIDE = SLICE. deleteme
Ryan Smith - Wednesday, September 2, 2015 - link
MS Word has a terrible fit with slices... Thanks!
Le Geek - Wednesday, September 2, 2015 - link
I find it surprising that the reviewer failed to mention anything about the new quad-core/quad-threaded mobile i5s. To me these are among the most, if not the most, interesting new SKUs.
Also, it would be nice if there was a specific new article investigating the efficiency gains from Skylake. From the Skylake-K review it seemed that the 6700K consumed too much power for the performance improvements it gave. Maybe it was due to the much higher voltages. In any case it would be interesting to see a follow-up article.
Cheers.
Le Geek - Wednesday, September 2, 2015 - link
If there was*
Le Geek - Wednesday, September 2, 2015 - link
Anything>>much
MrSpadge - Wednesday, September 2, 2015 - link
Well, for me the mobile quads are totally uninteresting, be it i5 or i7. I'm rather interested in what you can get out of the 6700 with a Z170 board.
Anyway, I second the request for an efficiency investigation. The 6700K started at insanely high stock voltages, so no wonder it's not better than Haswell in this regard. AT also showed some numbers with optimal core voltage, but those start at 4.3 GHz and 1.20 V. Which says nothing about any of the other chips, even in stock configurations (i.e. without undervolting).
dtgoodwin - Wednesday, September 2, 2015 - link
I also add my request. Based on Intel's comments, those tests likely need to be redesigned to approximate real-world usage, not user tasks recorded and then played back as fast as possible. I'd love to see efficiency from typical desktop usage, gamer, home theater, and "enthusiast". I'm sure you have several users that recordings could be taken from and played back to simulate these. Yes, the tests would take longer to perform, but it appears that's going to be a requirement to achieve accurate efficiency tests at this point.
MrSpadge - Wednesday, September 2, 2015 - link
Measuring response times might be crucial for such tests, as throughput might not be the most important metric anymore. All tasks will have finished before the end of the benchmark if played back slowly enough.
vred - Wednesday, September 2, 2015 - link
"Secret source" = "secret sauce"?
jjj - Wednesday, September 2, 2015 - link
Hard to care at this point given how little they offer and at what prices. The core does seem to be rather big, and far too big for low-wattage parts; perf per area is very, very low there. Core M is still insanely priced given how little it does. I was wondering if they'll do a hard price cut ahead of A72 SoCs, since at that point it will be a lot easier for folks to realize how absurd Intel's pricing is.
A few days ago I noticed a little board with a quad A7, 1GB RAM and plenty of connectors for just $15. That kind of computing device makes you wonder about where the world would be if we weren't stuck on Wintel. The premium this monopoly adds to PCs is heartbreaking at this point. At least in mobile things are OK for now.
extide - Wednesday, September 2, 2015 - link
Core M is on the order of 10x the performance of a quad A7. Core M is also a pretty small die; it is a similar die size to mobile SoCs. Not sure what you are talking about... it's a different product for a different price.
bji - Wednesday, September 2, 2015 - link
I'm pretty sure Intel intentionally gimped the thermal performance with inferior TIM underneath the heat spreader. I believe this is a ploy to allow them to release a part with improved TIM some time down the road as a new, higher-clocking revision. I think this is a way to gate performance intentionally so that they can sell essentially the same part over several releases.
Without significant competition from AMD, this is the kind of thing that Intel can unfortunately get away with.
bji - Wednesday, September 2, 2015 - link
http://www.overclock.net/t/1568357/skylake-delidde...
This thread gives good evidence that the part would clock much better with better TIM.
0razor1 - Wednesday, September 2, 2015 - link
Second that.
Shivansps - Wednesday, September 2, 2015 - link
You guys are ignoring the G4500 on desktop; it's marked as "HD530", so that's either GT2 or GT1.5, since an early July driver shows the desktop name for GT1.5 as "HD530".
Shivansps - Wednesday, September 2, 2015 - link
It also shows that HD510 is used for both GT1 and GT1.5 on U models, so I'm not 100% sure if the 4405U is GT1 or GT1.5.
KAlmquist - Wednesday, September 2, 2015 - link
"Interestingly enough, you could still have five native gigabit Ethernet controllers on [Skylake-Y]."I think that's wrong. You will notice that all other controllers have numbers. For example, for Skylake-U, there is SATA #0, SATA #1, and SATA #2, with SATA #1 appearing twice because you have a choice of which pins that controller is connected to. When GbE appears in the chart, there is no number. I'd suggest that is because there is only a single Gb Ethernet controller, with several different choices of which pins the Ethernet controller is connected to.
Alketi - Wednesday, September 2, 2015 - link
Silicon atom radius = 111 pm
14 nm / 222 pm = 63 atoms
But, who's counting? :)
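(Spelling out that arithmetic, with silicon's ~111 pm covalent radius as the assumption:)

# How many silicon atoms fit across a nominal 14 nm feature?
si_atom_diameter_nm = 2 * 0.111          # ~0.222 nm
print(round(14 / si_atom_diameter_nm))   # ~63 atoms
# As noted in the reply below, "14nm" is a node name, not a literal dimension.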
Ryan Smith - Wednesday, September 2, 2015 - link
Technically that's true. However, Intel's 14nm process isn't quite as 14nm as you think it is. And that goes for all the modern processes.
Alketi - Wednesday, September 2, 2015 - link
What!? It's all lies???
WorldWithoutMadness - Wednesday, September 2, 2015 - link
That's not how they measure it. You're thinking about an entirely different monster. Not to mention it is not truly planar, as in one atom thick.
This may help you with the node concept especially Intel's 14nm
http://www.anandtech.com/show/8367/intels-14nm-tec...
AlexIsAlex - Wednesday, September 2, 2015 - link
Will we see some performance reviews of Z170 motherboards any time soon (including POST timings)? I've been looking forward to upgrading to Skylake for a while now, but I'm not going to pick a motherboard before I know which one to go for.
toyotabedzrock - Wednesday, September 2, 2015 - link
So Skylake is a shiny ball of shit with MPAA/NSA safe harbor zones that are hardware-enforced?
V900 - Wednesday, September 2, 2015 - link
You need to work on your reading comprehension skills, son...
jay401 - Wednesday, September 2, 2015 - link
Sounds about right, actually.
asmian - Thursday, September 3, 2015 - link
Yeah, I'd like to see MUCH more comment and examination on the reasons for this. It may be being spun as something to prevent malware from attacking other legitimate processes, but it's equally a box of delights for malware of any kind if it can insert itself into a protected zone and have all its code and data safe from any debugging or monitoring. And when I say malware of any kind, an NSA back door is indeed what screams loudest to me. With MS preventing users from refusing individual updates to Windows 10, that's a mass-surveillance perfect storm just waiting to happen down the line unless there is intense scrutiny of any use of this.
The risk in this single feature could be the deciding factor for me not to move beyond Haswell-E with Intel.
hojnikb - Wednesday, September 2, 2015 - link
Is it possible to overclock non-K parts, given that BCLK is uncoupled from the rest of the system?
MrSpadge - Wednesday, September 2, 2015 - link
I'd want to know that as well. And is there still "multicore optimization", i.e. max turbo frequency for all cores? With an increased power budget this would make the 6700 fly... while staying in an energy-efficient regime, in contrast to full-throttle OC of the 6700K.
extide - Wednesday, September 2, 2015 - link
I am thinking... yes, at least on Z170... but the other chipsets, I don't know. Could be really fun :)
hojnikb - Thursday, September 3, 2015 - link
Could someone test this? Even if it's limited to Z chipsets, it's still great. I'm sure that if it is actually possible, some mobo maker will come up with cheapish Z boards (like ASRock).
extide - Saturday, September 5, 2015 - link
I'm sure plenty of people will test it, once the parts are actually in people's hands.
cactusdog - Wednesday, September 2, 2015 - link
Will the Intel muthaphuckers make a CPU faster than the 6700K for Skylake?
just4U - Wednesday, September 2, 2015 - link
Of course they will. Just as you go out to buy your 6700K in a year or two, they will release a 6790K with better thermal dynamics. (Let's not hold our breath on 6-core puppies; Intel doesn't seem interested as long as they have no competition.)
prisonerX - Saturday, September 5, 2015 - link
"Better thermal dynamics"? Is that a euphemism for Intel getting better at throttling your CPU? Actually it's not a euphemism, it's exactly what it means. Certainly worth another $300!
extide - Saturday, September 5, 2015 - link
extide - Saturday, September 5, 2015 - link
No, it's a euphemism for better TIM.
MrSpadge - Wednesday, September 2, 2015 - link
Any word on AVX-512, especially on those mobile Xeons? The same question applies to Xeons for the desktop socket. I suspect they'll simply be the same die, so if they have it, Intel would be deactivating it for all regular CPUs. The other option would be: not all Xeons have it, and the big dies actually have modified cores (as Intel hinted at IDF, without giving any details).
nils_ - Thursday, September 3, 2015 - link
I'd also like to see SHA256 as a CPU instruction, and not even for bitcoin mining.
zepi - Wednesday, September 2, 2015 - link
It is, in a way, curious that Intel is releasing a 35W TDP socketed desktop chip when the lowest they'll do for mobile is 45W.
zepi - Wednesday, September 2, 2015 - link
Oh, and on the conclusions page you mention 4+3 parts. From the article I understood there are only going to be 4+4 (128MB) parts and 2+3 (64MB) parts.
The latter are available as both 15 and 28W parts, though it is quite possible that the GPU performance takes quite a hit with the lower TDP. I hope you'll get to do comparisons between different GPU SKUs.
watzupken - Wednesday, September 2, 2015 - link
I have my doubts about whether the 72 EU graphics will bring about significant performance improvements. Looking at the trend of Intel's integrated graphics over the last 2 generations, between a 24 and 48 EU solution, the improvements are marginal in most cases. I believe it's limited by 2 things: memory bandwidth and, most importantly, the amount of power it can draw. Memory bandwidth can be rectified with the eDRAM, but it's still limited by power. I feel the difference between Skylake and Broadwell graphics is negligible if we compare the same class of processor.
xenol - Wednesday, September 2, 2015 - link
I'm failing to see how transistor counts and die size mean anything to anyone else. "Hey guys, Intel has 2 billion transistors in their CPU, we should make ours have 2 billion and 1! That'll show them!"
V900 - Wednesday, September 2, 2015 - link
Economics, my friend, economics... If you know a little bit about their manufacturing/fabs (FinFETs; double, triple or quadruple patterning?), plus how many transistors they have in their CPU, and you know the exact W×L and total area in square mm, then it's fairly trivial to come up with some pretty good guesses about their financial model.
How many CPUs can they make in a 300mm wafer in a day? How many of a given batch do they need to throw out, and how many can they repurpose towards lower priced parts? In other words, what is Intels real cost for manufacturing a given part, and how cheap can they sell it and still make a profit?
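(A hedged sketch of that kind of estimate, using the standard dies-per-wafer approximation; the die area and defect density below are made-up example numbers, not Intel's.)

import math

def gross_dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    # Wafer area / die area, minus an edge-loss term (common approximation)
    d, s = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s))

def yielded_dies(gross, die_area_mm2, defects_per_cm2):
    # Simple Poisson yield model: yield = exp(-D0 * A)
    return int(gross * math.exp(-defects_per_cm2 * die_area_mm2 / 100))

gross = gross_dies_per_wafer(300, 122)        # ~122 mm^2 die, assumed
print(gross, yielded_dies(gross, 122, 0.1))   # 0.1 defects/cm^2, assumed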
xenol - Wednesday, September 2, 2015 - link
Part of me still doesn't believe that info should be locked away (and Anand already found the die size, so you can get a good enough estimate with that and previous numbers).
At the end of the day, the only thing the consumer cares about is how fast it performs. The other numbers are just for pissing contests.
wintermute000 - Wednesday, September 2, 2015 - link
Pretty sure power consumption also matters, at least in mobile.
Xenonite - Thursday, September 3, 2015 - link
Actually, it seems that power consumption is the only thing that matters to consumers, even on the desktop.
All this talk about AMD's lack of competition being the reason why we aren't seeing meaningful generational performance improvements is just that: talk.
The real thing that hampers performance progress is consumers' plain refusal to upgrade for performance reasons (even a doubling in performance is not economically viable to produce since no one, except for me it seems, will buy it).
Consumers only buy the lowest power system that they can afford. It has nothing to do with AMD.
Even if AMD released a CPU that is 4x faster than piledriver, it wouldn't change Intel's priority (nor would it help AMD's sales...).
IUU - Wednesday, September 2, 2015 - link
Sorry for my tone, but "I'm failing to see" how transistor counts don't mean more to consumers than to anyone else. So, after 10 years of blissful carelessness (because duuude, it's user experience dat matters, ugh..),
you will have everyone deceiving you about what they offer at the price point they offer it. Very convenient, especially if they are not able to sustain an exponential increase in performance and pass to the next paradigm to achieve it.
Because until very recently we have been seeing mostly healthy practices, despite the fact that you could always meet people pointing to big or small sins.
Big example: what's the need of an IGP on a processor that consumes 90 watts, especially a GPU that is tragically subpar? To hide the fact that they have nothing more to offer the consumer, CPU-wise, at 90 watts (in the current market situation), and to have an excuse for charging more on a theoretically higher-consuming and "higher-performing" CPU?
Because what bugs me is: what if the 6700K lacked the IGP? Would it perform better without a useless IGP dragging it down? I really don't know, but I feel it wouldn't.
Regarding mobile solutions and money- and energy-limited devices, the IGP could really prove to be useful to a lot of people, without overloading their device with a clunky, lowly discrete GPU.
xenol - Wednesday, September 2, 2015 - link
If the 6700K lacked the iGPU with no other modifications, it would perform exactly the same.
MrSpadge - Wednesday, September 2, 2015 - link
Yes, it would perform exactly the same (if the iGPU is not used; otherwise it needs memory bandwidth). But the chip would run hotter since it would be a lot smaller. Si is not the best thermal conductor, but the presence of the iGPU spreads the other heat producers out a bit.
xenol - Wednesday, September 2, 2015 - link
I don't think that's how thermals in ICs work...
MrSpadge - Wednesday, September 2, 2015 - link
Thermodynamics "work" and don't care if they're being applied to an IC or a metal brick. Silicon is a far better heat conductor than air, so even if the GPU is not used, it will transfer some of the heat from the CPU + uncore to the heat spreader.
My comment was a bit stupid, though, in the sense that, given how tightly packed the CPU cores and the uncore are, the GPU spreads none of them further apart from each other. It could have been designed like that, but according to the picture on one of the first few pages it's not.
Xenonite - Thursday, September 3, 2015 - link
No, it wouldn't. You could easily spread out the cores by padding them with much more cache and doubling their speculative and parallel execution capabilities. If you up the power available for such out-of-order execution, the additional die space could easily result in 50% more IPC throughput.
MrSpadge - Thursday, September 3, 2015 - link
50% IPC increase? Go ahead and save AMD, then! They've been trying that for years with probably billions of R&D budget (accumulated over the years), yet their FX CPUs with huge L3 don't perform significantly better than the APUs with similar CPU cores and no L3 at all.
Xenonite - Thursday, September 3, 2015 - link
Yes, but I specifically mentioned using that extra cache to feed the greater number of speculative execution units made available by the removal of the iGPU.
Sadly, AMD can't use this strategy because GlobalFoundries' and TSMC's manufacturing technology cannot fit the same number of transistors into a given area as Intel's can.
Furthermore, their yields for large dies are also quite a bit lower and AMD really doesn't have the monetary reserves to produce such a high-risk chip.
Also, the largest fraction of that R&D budget went into developing smaller, cheaper and lower-power processors to try and enter the mobile market, while almost all of the rest went into sacrificing single-threaded design (such as improving and relying more on out-of-order execution, branch prediction and speculative execution) to design Bulldozer-like, multi-core CPUs (which sacrifice a large portion of die area, that could have been used to make a small number of very fast cores, to implement a large number of slow cores).
Lastly, I didn't just refer to L3 cache when I suggested using some of the free space left behind by the removal of the iGPU to increase the amount of cache. The L1 and L2 caches could have been made much larger, with more associativity, to further reduce the number and duration of pipeline stalls due to a needed piece of data not being in the cache.
Also, while it is true that the L3 cache did not make much of a difference in the example you posted, it's equally true that cache performance becomes increasingly important as a CPU's data processing throughput increases.
Modern CPU caches just seem to have stagnated (aside from some bandwidth improvements every now and then), because our CPU cores haven't seen that much of a performance upgrade since the last time the caches were improved.
Once a CPU gets the required power and transistor budgets for improved out-of-order performance, the cache will need to be large enough to hold all the different datasets that a single core is working on at the same time (which is not a form of multi-threading, in case you were wondering), while also being fast enough to service all of those units at once, without adversely affecting any one set of calculations.
techguymaxc - Wednesday, September 2, 2015 - link
Your representation of Skylake's CPU/IPC performance is inaccurate and incomplete due to the use of the slowest DDR4 memory available. Given the nature of DDR4 (high bandwidth, high latency), it is an absolute necessity to pair the CPU with high-clockspeed memory to mitigate the latency impairment. Other sites have tested with faster memory and seen a much larger difference between Haswell and Skylake. See HardOCP's review (the gaming section specifically) as well as TechSpot's review (page 13, memory speed comparison). HardOCP shows Haswell with 1866 RAM is actually faster than Skylake with 2133 RAM in Unigine Heaven and BioShock Infinite @ lowest quality settings (to create a CPU bottleneck). I find TechSpot's article particularly interesting in that they actually tested both platforms with fast RAM. In synthetic testing (Sandra 2015) Haswell with 2400 DDR3 has more memory bandwidth than Skylake with 2666 DDR4; it is not until you pair Skylake with 3000 DDR4 that it achieves more memory bandwidth than Haswell with 2400 DDR3. You can see here directly the impact that latency has, even on bandwidth and not just overall performance. Furthermore, in their testing, Haswell with 2400 RAM vs. Skylake with 3000 RAM shows Haswell being faster in the Cinebench R15 multi-threaded test (895 vs. 892). Their 7-Zip testing has Haswell leading both Skylake configurations in a memory-bound workload (32MB dictionary) in terms of instructions per second. Finally, in a custom Photoshop workload Haswell's performance is once again sandwiched between the two Skylake configurations.
Clearly both Haswell and Skylake benefit from faster memory. In fact, Skylake should ideally be paired with >3000 DDR4, as there are still scenarios in which it is slower than Haswell with 2400 DDR3 due to latency differences.
Enthusiasts are also far more likely to buy faster memory than the literal slowest memory available for the platform, given the minimal price difference. Right now on Newegg one can purchase a 16GB DDR3 2400 kit (2x8) for $90, a mere $10 more than an 1866 16GB kit. With DDR4 the situation is only slightly worse. The cheapest 16GB (2x8) 2133 DDR4 kit is $110, and 3000 goes for $135. It is also important to note that these kits have the same (primary) timings with a CAS latency of 15.
So now we come to your reasoning for pairing Skylake with such slow RAM, and that of other reviewers, as you are not the only one to have done this. Intel only qualified Skylake with DDR4 up to 2133 MT/s. Why did they do this? To save time and money during the qualification stage leading up to Skylake's release. It is not because Skylake will not work with faster RAM; there isn't an unlocked Skylake chip in existence that is incapable of operating with at least 3000 RAM speed, and some go significantly higher. HardOCP was able to test their Skylake sample (with no reports of crashing or errors) with the fastest DDR4 currently available today, 3600 MT/s. I have also heard anecdotally from enthusiasts with multiple samples that DDR4 3400-3600 seems to be the sweet spot for memory performance on Skylake.
In conclusion, your testing method is improperly formed, when considered from the perspective of an enthusiast whose desire is to obtain the most performance from Skylake without over-spending. Now, if you believe your target audience is not in fact the PC enthusiast but instead a wider "mainstream" audience, I think the technical content of your articles easily belies this notion.
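(One way to make the "high bandwidth, high latency" point concrete is converting CAS latency from cycles to nanoseconds; a quick hedged sketch with typical retail kit timings, not the kits used in the review.)

# First-word latency in ns: CAS cycles divided by the memory clock,
# where the memory clock in MHz is half the DDR transfer rate in MT/s.
def cas_latency_ns(transfer_rate_mts, cas_cycles):
    return cas_cycles * 2000 / transfer_rate_mts

for name, rate, cl in [("DDR3-1866 CL10", 1866, 10),
                       ("DDR3-2400 CL11", 2400, 11),
                       ("DDR4-2133 CL15", 2133, 15),
                       ("DDR4-3000 CL15", 3000, 15)]:
    print(f"{name}: ~{cas_latency_ns(rate, cl):.1f} ns")
# DDR4-2133 CL15 (~14 ns) has a noticeably slower first access than
# DDR3-2400 CL11 (~9 ns); higher DDR4 clocks close most of that gap.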
MrSpadge - Wednesday, September 2, 2015 - link
The primary reason Skylake is only qualified for DDR4-2133 is that there is no faster JEDEC standard yet. That's also the reason AT tested at that, I think. However, I agree: restraining an "enthusiast OC CPU" like that seems silly, especially given the very small price premium of significantly faster DDR4. I hope some memory scaling analysis is included in future Skylake articles.
techguymaxc - Wednesday, September 2, 2015 - link
Good point; you are correct that this is the root cause. My analysis suggests the reason is one step further down (or up, depending on your view) the ladder.
wintermute000 - Wednesday, September 2, 2015 - link
Can the non-K chips operate above 2133, or are they locked to 2133 (like non-K Haswell @ 1600)?
techguymaxc - Thursday, September 3, 2015 - link
Someone correct me if I'm wrong here, but I believe that is a motherboard feature. As long as you're using a Z-series chipset-based motherboard, you *should* be able to run faster memory.
MrSpadge - Thursday, September 3, 2015 - link
Sounds plausible... but no one is talking about that. Damn it! Not even a single review of the regular 6700, and it has been on the shelves in better quantities than the 6700K for 2-3 days (Germany).
Impulses - Friday, September 4, 2015 - link
Not available or even listed anywhere yet stateside... :/
NikosD - Monday, September 7, 2015 - link
Actually, non-K Haswell chips like the Core i7-4790 using a Z chipset can most of the time reach DDR3-2133 with no problems. There is only a multiplier lock from Intel, not a memory lock to 1600MHz.
Xenonite - Thursday, September 3, 2015 - link
Actually, enthusiasts only focus on achieving the lowest possible system power.
It seems that most people wouldn't mind further gimping Skylake's performance by underclocking DDR4-2133 to the lowest allowed setting, in the name of needless desktop power savings.
nils_ - Thursday, September 3, 2015 - link
You haven't seen German energy prices ;)
Xenonite - Thursday, September 3, 2015 - link
Yeah, I'm sure there are a lot of reasons why we are in this situation (buying a $100 CPU and then having to spend more than that on electricity costs seems a little silly if you are on such a tight budget to begin with); but I still find it incredibly sad that, even if you are willing to spend $1000 on the best CPU performance that money can buy (Xeons aren't overclockable, so a heavily overclocked consumer CPU is normally able to beat almost all single-socket workstations, even in heavily multi-threaded workloads), you are still constrained by some arbitrary power limit.
Being willing to spend your hard-earned cash on $1000~$2000 of power bills per year still doesn't change the fact that you cannot, and will NEVER be able to, build a system that can make proper use of that amount of power for maximum high-framerate gaming performance.
(PS: I currently live in South Africa, where our electricity costs are pretty much on par with, and sometimes (if your skin color, like mine, isn't black) even greatly exceed, those of Germany. Heck, once you pass
beastiful - Wednesday, September 2, 2015 - link
And the point of releasing Q170 when vPro CPUs aren't ready yet is?
jhh - Wednesday, September 2, 2015 - link
I expect that most of the mobile Xeons will end up in embedded systems which need the reliability of ECC, such as wireless core networks, including functions such as transcoding which can use the GPU for audio/visual processing. The GPU actually makes for a pretty effective DSP, although not necessarily as power-efficient as a DSP.
piasabird - Wednesday, September 2, 2015 - link
A lot of CPUs, formerly Broadwell, are in very short supply, and often the suppliers are charging much more for them than they are worth. This is particularly true of Iris graphics on the desktop.
blakflag - Wednesday, September 2, 2015 - link
At this point the only interest I have is for a cheap, ultra-efficient media server to fill out my new, as-yet-unused mini-ITX case. My Sandy Bridge i5 is still doing quite well as my power machine. When the lower-end stuff comes out I'll take a look.
Hannibal80 - Wednesday, September 2, 2015 - link
Wonderful article
Hannibal80 - Wednesday, September 2, 2015 - link
Doh! First comment, first fail. The "wonderful" was for the one about mobile CPU core count. What a shame.
Hannibal80 - Wednesday, September 2, 2015 - link
By the way, this is still a good one ☺
Freakie - Wednesday, September 2, 2015 - link
Silly question, but in the DMI 3.0 section, is 1pJ/bit supposed to be 1 picojoule per bit?
extide - Saturday, September 5, 2015 - link
Yes
sonicmerlin - Wednesday, September 2, 2015 - link
"For an architecture change, users (us included) have come to expect a 5-10% generation on generation increase at the same frequency"Uh... I expect a lot more than 5-10%. With ARM we get 30--40% every year.
prisonerX - Thursday, September 3, 2015 - link
You can't make a purse out of a sow's ear.
BMNify - Saturday, September 5, 2015 - link
"prisonerX - Thursday, September 03, 2015: You can't make a purse out of a sow's ear."
hmm
https://libraries.mit.edu/archives/exhibits/purse/...
Report: "On the Making of Silk Purses from Sows' Ears," 1921
https://libraries.mit.edu/archives/exhibits/purse/...
extide - Saturday, September 5, 2015 - link
Yeah, that's because they are starting with a MUCH MUCH less refined architecture. It's easy to improve a ton on something that isn't as good to begin with... Plus, there is only so much you can do; you quickly run into diminishing returns.
This is seriously one of the simplest concepts ever, but people still don't seem to get it...
BMNify - Saturday, September 5, 2015 - link
"extide: Plus, there is only so much you can do, you quickly run into diminishing returns."that's a subjective POV, if you where to remove the base sram and dram and replace them with 10s of femto seconds Correlated Electron RAM and/or Mram in both sram and new wideIO2 configurations for these Arm invested NVram and role them out in the usual arm/foundry collaborations then you begin to see the latest advertised Intel far slower new "3D XPoint" as a sub standard technology in comparison ....
http://www.zdnet.com/article/arm-licenses-next-gen...
Galatian - Thursday, September 3, 2015 - link
"For users who actively want an LGA1151 4+4e configuration, make sure your Intel representative knows it, because customer requests travel up the chain."Who do I need to talk to? Seriously I didn't get Broadwell because I knew Skylake was right around the corner. I mean why introduce a pocketable 5x5 platform, just to announce that you have no plans to actually release the perfect processor for that platform?
Valantar - Thursday, September 3, 2015 - link
"For Skylake-U/Y, these processors are not typically paired with discrete graphics and as far as we can tell, the PCIe lanes have been removed from these lines. As a result, any storage based on PCIe (such as M.2) for devices based on these processors will be using the chipset PCIe lanes."According to Intel Ark, the 15W U-series CPUs (at least the i5s and i7s (including the Iris 6650U), which I looked at) have 12 PCIe 3.0 lanes, available in "1x4, 2x2, 1x2+2x1 and 4x1" configurations. Worth updating the article?
Valantar - Thursday, September 3, 2015 - link
And reading on, I suddenly realize why you said what you did. 12 lanes does indeed line up with the ones from the PCH-LP. Does this point toward more of an SoC-like integration of features for U-/Y-series CPUs?
BMNify - Thursday, September 3, 2015 - link
"A lot of these improvements are aimed directly at the pure-performance perspective (except L2 and FMUL to an extent), so you really have to be gunning it or have a specific workload to take advantage."i cant believe that to be true, as its a tock and yet no real world view can call this tock an improvement never mind "so you really have to be gunning it or have a specific workload to take advantage." as the real world x264/x265 show no benefit what so ever here....
also Ian, was it an oversight on your part that in all the 9 pages analysis you did not point out the missing generic "AVX2 SIMD" in most of all these launched today.... please note that the official Intel slides pacifically remove any mention of any AVX SIMD in their latest charts etc.
it seems a clear cut choice on intels part to try and stop news outlets from mentioning and pointing out the lack of 2015 class SIMD on many of these soc released today.....
can you at least go through the included charts and point out all the cores/soc that Do Not include generic AVX2 SIMD to make it clear which cores/soc to actually buy (anything with AVX2+) and what new/old soc to discard (anything with only antiquated 2006 sse4 SIMD)
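(For anyone who wants to check a specific machine rather than wait for a chart, a minimal Linux-only sketch that reads /proc/cpuinfo; the flag names are the kernel's, and this is an illustration, not a definitive detection tool.)

# Quick SIMD capability check on Linux
def cpu_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for isa in ("sse4_2", "avx", "avx2", "avx512f"):
    print(f"{isa}: {'yes' if isa in flags else 'no'}")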
Xenonite - Thursday, September 3, 2015 - link
Actually, consumers will actively avoid AVX2-capable processors, since they could use more power (especially on the desktop, where Intel's power limiter allows AVX2 to really boost application performance / power consumption).
BMNify - Monday, September 7, 2015 - link
I don't see any logic to your "consumers will actively avoid AVX2 instruction set" comment, as by definition "SIMD" (Single Instruction, Multiple Data) describes computers with multiple processing elements that perform the same operation on multiple data points simultaneously.
So in fact AVX2 256-bit SIMD does exactly the opposite of wasting more power compared to the older/slower data paths today. It's also clear in mobile/ultra-low-power devices, where "Qualcomm's new DSP technology boasts heavy vector engine—that it calls Hexagon Vector eXtensions or HVX—for compute-intensive workloads in computational photography, computer vision, virtual reality and photo-realistic graphics on mobile devices. Moreover, it expands single instruction multiple data (SIMD) from 64-bit to 1,024-bit in order to carry out image processing with a wider vector capability..." That in fact doubles even Intel's as-yet-unreleased 512-bit AVX3, with their lowest-power 1,024-bit SIMD to date, although it's unknown whether it's a refreshed NEON or another complementary SIMD there... we shall see soon enough.
vred - Thursday, September 3, 2015 - link
Huh? What are you talking about? AVX/AVX2 support is no different compared to Haswell - Core i7/i5/i3 CPUs have it, Pentium and Celeron do not.
nils_ - Thursday, September 3, 2015 - link
I wonder if they'll come out with a Xeon-D based on Skylake soon. And are they skipping Broadwell-E?
BSMonitor - Thursday, September 3, 2015 - link
Current MacBook Pros are still Broadwell?? Did Apple get Skylake early??
bodonnell - Thursday, September 3, 2015 - link
The lineup looks pretty good, but honestly following x86 processors has been pretty boring since Sandy Bridge, as each new generation brings only pretty mundane evolutionary changes. Fingers crossed that AMD can actually bring something competitive to market with Zen and shake things up a little, as the under-performing Bulldozer and its derivatives have really let Intel rest on its laurels.
zaiyair - Thursday, September 3, 2015 - link
The article says that vPro-enabled processors were NOT launched. However, all the following processors are vPro-enabled and were launched at the event:
i7-6700
i7-6700T
i5-6500T
i5-6500
i5-6600T
i5-6600
http://ark.intel.com/compare/88196,88200,88183,881...
I'm confused, can anyone explain this?
extide - Saturday, September 5, 2015 - link
I think he was talking about the mobile chips with vPro.
ueharaf - Friday, September 4, 2015 - link
The Intel mobile i7-6820HK doesn't have SIPP 2015; what does that mean?
Mark_gb - Friday, September 4, 2015 - link
So launch day was Sept 1. It's now Sept 4, and I still see none of these anywhere... Has Intel decided just to keep doing paper launch after paper launch? They used to actually make CPUs...
Fredgido - Friday, September 4, 2015 - link
It has yet to be explained, the 10% performance loss on the Linpack benchmark that you get going from Cannonlake to Skylake. :S
Fredgido - Saturday, September 5, 2015 - link
20% here: http://puu.sh/k0mAu/a82b686201.jpg
extide - Saturday, September 5, 2015 - link
Well, a decrease in performance from Cannonlake to Skylake would be correct. However, I assume you mean Haswell, not Cannonlake, and that is probably due to the L2/FMUL changes. However, you are also looking at chips with different clockspeeds, with Haswell having a faster clock, so that also contributes to this result.
It is somewhat disappointing that Intel has decided to make changes that significantly favor power consumption over performance.
I have a feeling the Xeons will not have these same changes, so it will be interesting to see what the Skylake E5's are like...
shodanshok - Sunday, September 6, 2015 - link
Mmm, I have the very opposite feeling: I think that these changes were done explicitly for the benefit of server and mobile chips. These two categories (server and mobile) are greatly limited by their power usage (and by their ability to effectively remove the generated heat), while being only marginally dependent on FPU performance.
So trading some performance for improved power efficiency suddenly makes a lot of sense, especially if Intel wants to continually increase Xeon core counts (and it seems so).
SeanJ76 - Saturday, September 5, 2015 - link
Not impressed...
exmachiner - Monday, September 7, 2015 - link
Why is there no desktop SKU with GT4e/Iris Pro? Will it launch at a later date? There is an Iris Pro version in Broadwell, IIRC.
ZachSaw - Monday, September 7, 2015 - link
I'd be interested to know what its relative performance is vs. a discrete card like the 750 Ti when it comes to the SM5.0 version of NNEDI3 with MPDN. Intel GPUs surprisingly run twice as fast with the shader version as compared to the OpenCL version (AMD loves shaders too - the only exception here is NVIDIA's Maxwell architecture). It'll be interesting to see if Skylake is the perfect HTPC!
janolsen - Tuesday, September 8, 2015 - link
If I understand the page 3 slide correctly, eDRAM will only be for BGA - and thus no Iris iGPU for desktop; Broadwell chips may be a bit faster for those not needing a discrete GPU, for gaming and similar.
HollyDOL - Wednesday, September 9, 2015 - link
Hm, is this a paper launch only? The only available parts until now are the 6600K and 6700K (all big Czech e-shops as well as ones like Newegg). Awaiting the 6100T eagerly (I want to build an mITX baby for my mom since her old (ancient) computer died 2 weeks ago)... and for obvious reasons I'd rather prefer the new platform over the old one in case there was ever a need to upgrade something (which I doubt, but still...).
qasdfdsaq - Thursday, September 10, 2015 - link
I feel like there's something missing here. We get 15W dual-core parts with Iris GT3e, but quad-core parts are all 45W with no GT3e. Indeed, there are no quad-core mobile chips with Iris graphics, although Broadwell and Haswell both had them in the 45W quad-core range. There's certainly no issue fitting it in the power envelope, given you can literally fit 3x 2-core chips with GT3e into the 45W TDP.
LDW - Friday, September 18, 2015 - link
I like to have a laptop for its portability and am not willing to buy a second system for my occasional gaming. In my experience, games like Civ 5, Civ: BE and Skyrim are happy with two cores but would like more graphics power than my current laptop (i7-4700MQ with no additional graphics chip) offers...
To my surprise, I find that the H series of processors has less graphics power than the U series. I suspect that the U series' 2 cores/4 threads would be just fine for the games I play, and I know they would like the additional graphics power. So I'm likely to be looking at the U series as I look at replacements for my current laptop, not the H series as I expected.
I'm curious if others reach that conclusion as well... and am looking forward to AnandTech's future comparisons between the H and U series graphics capabilities.
ldw
francisca euralia - Tuesday, October 20, 2015 - link
Hello, can you give me a summary of this page with the most important definitions?
Bulat Ziganshin - Saturday, December 12, 2015 - link
> On the other side of the coin, the FMUL (floating point multiply) has increased in latency over Broadwell, and returned to the same as Haswell. We are told that this is due to design decisions that allows for better performance when it comes to creating enterprise silicon, which is an interesting explanation in itself.
They mean AVX-512 support, which required microarchitecture changes unfavorable for old AVX-256 code.
systemBuilder - Friday, December 25, 2015 - link
Wow, in most respects, GPU FLOPS performance peaked in the Haswell generation, and is barely back to the same levels in Skylake! They have just been trying to cut power consumption; 3 years of no progress in GPU throughput!
systemBuilder - Friday, December 25, 2015 - link
Shameful! What are they thinking, giving the customer NOTHING NEW in the past 3 years?