76 Comments

  • austinsguitar - Friday, June 23, 2017 - link

    Ya ya, alright. Show me the performance improvement from the 22nm 4790K to the 14nm 6700K and then we will be talking. You can shrink all day, but if things don't change under the hood you will get nothing, GlobalFoundries. Stop this performance crap; +40% is a lie and you know it!
  • austinsguitar - Friday, June 23, 2017 - link

    And I'm still talking about power draw :/ not just performance.
  • Samus - Friday, June 23, 2017 - link

    I think CPUs in the mainstream are at a peak. Almost nothing in the consumer space is CPU limited anymore, especially in common office applications. Even most games are GPU limited before they are CPU limited. And in ultra-high computing applications, GPUs take the crown (most supercomputers are based on GPUs or non-x86 architectures).

    So AMD's strategy to improve performance per watt (which was Intel's strategy 10 years ago) is admirable, especially since it makes sense more than ever, and that strategy was a huge success for Intel after NetBurst.

    But to think someone can out-manufacture Intel is almost inconceivable. Because it's never been done. For 40 years, Intel has been at the forefront of lithography.
  • olde94 - Saturday, June 24, 2017 - link

    Well, I can tell you that a 2500K @ 4.3 GHz (so basically an i5-6600 with no overclock) is just barely enough for VR gaming. Other than that, I think you're right, though some gaming benchmarks with the new AMD chips have shown that single-core performance can make a difference of up to 10%.
  • Nagorak - Saturday, June 24, 2017 - link

    In what situation are you CPU limited? I was using a 2500K for VR (still in use in another family member's machine) and I never noticed a problem. I was GPU limited in all circumstances, and it's easy to make yourself more GPU limited by increasing the super sampling.

    The only time I actually had a problem was with Job Simulator, and that was some sort of bizarre bug, which I believe they finally sorted out.
  • Samus - Saturday, June 24, 2017 - link

    VR should never be CPU limited. It's pushing pixels (usually dual ~1080p displays). I'm sure there are some more demanding CPU applications, but outside of 3D modeling like SolidWorks, a program that is almost always CPU bound for simulation and physics tasks, I don't see how a typical graphics-intensive VR app is going to care what CPU you have as long as it's modern, 3 GHz, and at least dual-core.
  • Tabalan - Saturday, June 24, 2017 - link

    Nah, I'm using SolidWorks in my job (creating models of electrical routes) and I can say it's pretty much single-threaded. Similar case with Inventor, also mostly on one core. Things change completely when you render something in these programs; then it consumes all the cores you've got.
  • mapesdhs - Tuesday, June 27, 2017 - link

    Blows my mind that, sooo many years after SGI sold Inventor, it still hasn't been fully recoded to take advantage of multiple cores. I remember test-loading Inventor models into perfly in the mid-'90s on an Onyx2 and seeing huge speedups (four or five times faster) because Performer automatically uses all available resources.
  • Meteor2 - Sunday, June 25, 2017 - link

    Any AAA game with a 1080 Ti or better.
  • Scannall - Saturday, June 24, 2017 - link

    You have to remember that GloFo's lithography has been paid for with billions of dollars' worth of IBM's research. Not very many companies out there, Intel included, spend more on R&D, nor hold as many patents. When IBM went open source on a lot of its tech, and shared a lot of its tech in these alliances, it was a big thumb in Intel's eye. Just because you can't play Tomb Raider on a POWER9 CPU doesn't mean their tech sucks.
  • Santoval - Saturday, June 24, 2017 - link

    Eventually even GPUs might reach a dead end. They are equally limited by lithography nodes, transistor densities, and clock speeds. Their density directly depends on smaller nodes, and if sub-7nm nodes never materialize, or are too expensive to mass produce, they will also plateau. The only other alternative is switching to bigger dies, which is out of the question for obvious reasons (you can go bigger for the next generation, two generations tops, but then you hit a die-size wall), or higher clocks, which is ruled out due to TDP constraints. GPU clocks will probably reach their limit in the 1.7-1.8 GHz range, maybe 2 GHz tops with GAA FETs.
  • JasonMZW20 - Thursday, August 3, 2017 - link

    GPUs will also go the multi-chip module route because they're advancing faster than lithography tech. It'll be interesting to see how that is handled by Nvidia and AMD. We know AMD will use Infinity Fabric in its GPUs starting with Vega, so technically, they could start developing an MCM GPU package using smaller, more manageable dies that are tied through IF.

    Knowing Nvidia, they're already doing the research and development work needed for MCM implementation on their end. They may come up with a novel approach to tying all of the modules together without introducing bottlenecks in the architecture or introducing awful latencies. We'll see.

    I don't expect to see MCM GPUs before 2020.
  • Isaacc7 - Saturday, June 24, 2017 - link

    Dwarf Fortress is CPU limited, and it's a free game!
  • anubis44 - Sunday, June 25, 2017 - link

    @Samus: "But to think someone can out-manufacture Intel is almost inconceivable. Because it's never been done. For 40 years, Intel has been at the forefront of lithography."

    AMD doesn't need to 'out-manufacture' Intel. They simply needed a sufficiently good process tech, which they currently have for Zen, to manufacture ~180 mm² 8-core CPU dies with 80%+ yields (see the yield sketch below), because they designed a memory controller specifically built for near-limitless scalability. This allows them to make every CPU they sell out of the same 8-core (2-CCX) dies. Need a 16-core AMD Threadripper? No problem. Stick two 8-core Zen dies onto the same substrate, connect them electrically, and voila! An instant 16-core CPU that actually scales to roughly double the throughput of the single 8-core CPU. Need a 32-core/64-thread Epyc server CPU? No problem! Stick 4 x 8-core Zen dies onto the same substrate... Exactly.

    Intel cannot pull a memory controller like the Infinity Fabric out of their a$$. It takes at least 18 months for them to do this, even with all the king's soldiers and all the king's men.

    To paraphrase Scotty from Star Trek: "I cannaugh change the laws of physics! I've got to have 18 months!" Unfortunately, Intel doesn't have 18 months. Epyc is already being sold instead of Intel server CPUs right now.
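    A rough way to see why those small dies make the economics work: a minimal sketch, assuming a simple Poisson defect model and an illustrative defect density (neither the model parameters nor the monolithic 32-core comparison die are AMD or GloFo figures):

        import math

        def poisson_yield(die_area_mm2, defects_per_cm2):
            """Fraction of defect-free dies under a simple Poisson yield model."""
            return math.exp(-die_area_mm2 * defects_per_cm2 / 100.0)

        D0 = 0.1  # defects per cm^2 -- an illustrative guess, not a published figure

        small = poisson_yield(180, D0)      # one ~180 mm^2 8-core die
        big = poisson_yield(4 * 180, D0)    # a hypothetical monolithic 32-core die

        print(f"~180 mm^2 die yield: {small:.0%}")   # ~84%
        print(f"~720 mm^2 die yield: {big:.0%}")     # ~49%

    The other half of the argument is that a defective small die is a small loss, and can often still be sold as a lower-core-count part, whereas a defect in a monolithic ~720 mm² die puts the whole slab at risk.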
  • edzieba - Sunday, June 25, 2017 - link

    "Intel cannot pull a memory controller like the Infinity Fabric out of their a$$. It takes at least 18 months for them to do this, even with all the king's soldiers and all the king's men."

    They don't need to. Their modular mesh fabric (MoDeX) has already been deployed in Knights Landing-based Xeon Phis, and will be available to consumers within days in Skylake-X. Rather than integrated modules that are joined in sets of 4 (similar to how older dual- and quad-core designs were made a decade ago), the modules are individual cores and interfaces that hook into a network.
  • Meteor2 - Sunday, June 25, 2017 - link

    Can't wait for TR and Epyc to be benchmarked against Skylake-X. Then we'll really know the lay of the land.
  • SaturnusDK - Sunday, June 25, 2017 - link

    So far, from the Skylake-X reviews, it seems the MoDeX is a gigantic failure. It is responsible for the massive power consumption increase and at the same time at least partly responsible for the lackluster performance. So unless they can correct that with firmware updates, it's not looking good for Intel.
  • Meteor2 - Sunday, June 25, 2017 - link

    I expect Intel to lose its process-leadership crown simply because it laid off so many engineers. They've cut their R&D back so far that it's an open goal for GloFo and friends.

    7 nm will help GPUs as well as CPUs, of course.
  • TemjinGold - Friday, June 23, 2017 - link

    Seeing as GF didn't make either of those chips, I'm not seeing the lie.
  • austinsguitar - Friday, June 23, 2017 - link

    This just proves a simple die shrink will not solve all the woes of performance per watt. It's just manufacturing jargon that gets on my nerves.
  • StevoLincolnite - Friday, June 23, 2017 - link

    I just wish fabs would stop using 'nm' as a marketing angle. GlobalFoundries' 7nm is likely not a true 7nm process.
  • Gondalf - Saturday, June 24, 2017 - link

    GloFo has never been on track in recent years. Why believe it will be this time, on a difficult 7nm process that will be GloFo's first experience with FinFETs (without licensing)?
    IMO we need to add 12 months to all these projections.
  • Scannall - Saturday, June 24, 2017 - link

    Because of about a bazillion IBM engineers. No other reason. But that's enough.
  • FreckledTrout - Sunday, June 25, 2017 - link

    Exactly, Scannall. Back in 2015 GlobalFoundries bought out IBM's Microelectronics division, which netted them 16,000 patents and a ton of seasoned engineers. IBM has always been ahead of Intel on the R&D/engineering side, so these engineers will surely help immensely. The great upside is that AMD will have a manufacturing process competitive with Intel's and won't be a node or two behind anymore, like they had been until Ryzen landed.
  • gruffi - Tuesday, June 27, 2017 - link

    You cannot compare recent years, with their transitions (from AMD and IBM), canceled processes (e.g. 14XM), etc., with the upcoming years. Now GloFo is on track again. And don't worry, they have more than enough experience with FinFETs. You also don't need to add 12 months to these projections. You just need to understand what they mean. For example, entering 7nm mass production doesn't mean you can buy 7nm CPUs the next week.

    Btw, Intel was almost one year late with their 14nm process. Now they are late again with 10nm, having postponed it more than once. It's not like only GloFo has to cope with schedules. ;)
  • Meteor2 - Sunday, June 25, 2017 - link

    Steady on. Everything I've seen suggests GF's 7 nm would also be 7 nm under Intel's naming rules. TSMC's is not, though.
  • Nagorak - Saturday, June 24, 2017 - link

    I'm not sure what you're talking about. Performance per watt has improved quite a bit. Ryzen is way more power efficient than any previous AMD processors. The issue isn't performance per watt; the problem is that absolute performance has largely stalled.
  • Samus - Saturday, June 24, 2017 - link

    Right, but do we need more absolute performance? It's always great to innovate, but CPUs are bound by so many other bottlenecks (from hardware interfaces to lacking software optimization) that it's pointless for AMD to make ultra-high-performance cores.
  • Meteor2 - Sunday, June 25, 2017 - link

    Because those bottlenecks, which I don't personally recognise anyway (our HPC is Top20), will be overcome.

    Perf/watt is what matters, and new nodes are what make the biggest improvements to that metric.
  • Sarah Terra - Friday, June 23, 2017 - link

    Awesome, if you build it.... they will come 8)
  • Sivar - Friday, June 23, 2017 - link

    Each new process has a new set of metrics, with little regard for the preceding or following process. Take a look at the TSMC 20nm process, for example. It was all but a complete failure, causing Nvidia to go back to the drawing board, but TSMC's previous and following processes delivered.
  • Kvaern1 - Friday, June 23, 2017 - link

    "Each new process has a new set of metrics with little regard for the preceding or proceeding process. Take a look at the TSMC 20nm process, for example. It was all but a complete failure, causing nVidia to go back to the drawing board, but TSMC's previous and following process delivered."

    I don't really think "all but a complete failure" squares with being the exclusive process for the iPhone CPU of its time.
  • name99 - Saturday, June 24, 2017 - link

    Define "complete failure" for TSMC 20nm.
    It gave Apple a better CPU for that year (A CPU that works well enough that it has been Apple's 'low-end go-to CPU' for their second tier products like Apple TV and iPad mini.
    It probably made a profit for TSMC.
    It was used by other vendors than Apple, including both low end and high end (SPARC M7).

    The only way in which you can claim it was a failure is if your metric of the world is that the ONLY thing that matters is what the GPU vendors do for their discrete products. And if that's your model, well then all the Intel processes are also a failure.

    Does this matter? Yes it does. I'm sick of these idiotic analyses of fab processes that look at ONE customer (and that's what it always boils down to, a single data point) and throw out a hot take based on that single customer. We're seeing the exact same thing right now with TSMC 10nm --- expect to be flooded with a series of equally inane claims that TSMC 10nm is a failure because it, likewise, gets used by Apple for only a year, and nV (or some other large customer) doesn't utilize it right away.
    We're dealing with massive technology complexes here that take years to create, that persist for years, and whose importance consists not only of the products immediately fabbed on them but also of what they teach for the next process. And yet we have these people on the internet running around insisting that these complexes be treated like a children's game, over in half an hour and then to be assigned a score and a winner!
  • name99 - Saturday, June 24, 2017 - link

    Oops, and HomePod --- forgot about that one.
  • Lolimaster - Friday, June 23, 2017 - link

    Consider that right now Ryzen has the highest efficiency per watt in x86 land, and that efficiency goes to sick levels when you downclock the Ryzen 7 1700 a bit.

    800 Cinebench R15 points with just 35 W of power consumption (on par with an i7-6700 at stock clocks, a 65 W+ chip).
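    Taking those figures at face value (they are the commenter's numbers, not a controlled measurement), the implied efficiency gap is roughly:

        \[
        \frac{800\ \text{pts}}{35\ \text{W}} \approx 22.9\ \text{pts/W}
        \qquad \text{vs.} \qquad
        \frac{800\ \text{pts}}{65\ \text{W}} \approx 12.3\ \text{pts/W}
        \]

    i.e. close to a 2x perf/watt edge in that one Cinebench scenario, if the 35 W figure holds.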
  • Gondalf - Saturday, June 24, 2017 - link

    You have a strange way of measuring the performance/watt parameter. I don't think it would be accepted by IT managers.
    Try again, my friend.
  • Dr. Swag - Friday, June 23, 2017 - link

    It's not as apparent when you ramp up clocks, but when you look at mobile SoCs, the clock speeds going from Haswell to Skylake or Kaby Lake went up a lot for the same amount of power draw. Usually these metrics aren't general, but rather measured at specific frequencies.
  • mga318 - Friday, June 23, 2017 - link

    Uhm... even if for some bizarre reason GlobalFoundries had made those Intel chips, that still wouldn't matter. GF's only job is process nodes and die shrinks. GF doesn't do architecture designs of any kind. They have nothing to change "under the hood."
  • anubis44 - Sunday, June 25, 2017 - link

    IBM, Samsung, GlobalFoundries and AMD will crush Intel. It's all part of the 'virtual gorilla' strategy that Jerry Sanders was following back in the late 1990s/early 2000s. AMD has managed to form enough strategic alliances to beat Intel. Teamwork is winning over a closed ecosystem.
  • Rοb - Sunday, July 23, 2017 - link

    Austinsguitar wrote: "... show me the performance improvement from the 22nm 4790k and the 14nm 6700k and then we will be talking. ...".

    I tried to find a website that people will consider more fact-based than biased; I didn't want to spend forever doing that, so this is what I came up with:

    Intel 14nm i7-6700k (2015) vs. 22nm i7-4790k (2013): http://cpu.userbenchmark.com/Compare/Intel-Core-i7...

    Summary: Wait 2 years for a 5% improvement in performance, a doubling of Memory Capacity at faster speed, and an internal GPU Upgrade, for $60 less. The buyer response was described as: "Hugely higher market share".

    ---

    For the next comparison, let's go to the next CPU.

    Intel 14nm i7-7700k (2017) vs. 22nm i7-4790k (2013): http://cpu.userbenchmark.com/Compare/Intel-Core-i7...

    Summary: Wait 4 years for a 15% improvement in performance, a doubling of Memory Capacity at faster speed, and an internal GPU Upgrade, for $12 less. The buyer response was described as: "Hugely higher market share".

    ---

    Intel 14nm i7-7700k (2017) vs. 14nm i7-6700k (2015) didn't bring anything but a 10% performance increase and another GPU Upgrade.

    Going back from 2013, the 22nm i7-4790K compared with the 32nm i7-990X was a big step forward: 2 fewer cores, less than half the price, and a performance increase similar to going from the i7-4790K to the i7-7700K.

    Some comparisons favor your point and others do not. I didn't drink the "Intel Kool-Aid" but a lot of people did; and they did bench better than AMD.

    The "pattern" is not GloFo, the common denominator is Intel-unchallenged.

    Intel rolls their own; AMD has been fabless since GloFo was spun off, and GloFo has bought IBM's foundry business as well. You can't use Intel CPU comparisons to chart the direction and performance of other companies.

    GloFo's most recent claims for their 7nm process are: https://www.globalfoundries.com/technology-solutio... - "7LP technology delivers more than twice the logic and SRAM density, >40% performance boost and >60% total power reduction compared to 14nm foundry FinFET offerings.".

    IF the greater-than-40% were only 30%, and the greater-than-60% were only 50%, THEN applying those numbers to the Epyc CPU we get a 30% performance increase and 50% lower TDP. Let's double the TDP back up by doubling the cores, thus also doubling that to a (multicore) 60% increase.

    That's 64 cores at 3.2 GHz × 160% ≈ 5.1 GHz in 2H 2018 with the same TDP as the newly released (2017) Epyc 7601.

    I'm interested in the 2018 7nm Epyc. I'm not sure how Intel's numbers got thrown into this. I don't know why you're "ya ya ya-ing" GloFo.
  • Lolimaster - Friday, June 23, 2017 - link

    I can see 16-core Ryzen 7s for the AM4 mobo with the Zen 2 7nm upgrade.

    TR II will be updated to 24 cores with 7nm Zen 2, and 32 cores with Zen 3 7nm+. The Lego design of Zen just killed Intel.
  • MajGenRelativity - Friday, June 23, 2017 - link

    While I do share your positive outlook for Zen cores on smaller nodes, I think it's wise to refrain from making statements like "the Lego design of Zen just killed Intel." It very well could, but we haven't seen signs of that happening yet. Intel still makes lots of money.
  • bigboxes - Friday, June 23, 2017 - link

    No one is saying any of that. We're just happy to have a competitive product once again. It has been some years since I was all AMD. I look forward to increased features and competitive pricing. I've been Intel/Nvidia in recent years. Can't wait to build a new AMD rig in the near future.
  • Hurr Durr - Friday, June 23, 2017 - link

    It's just an AMD shill, relax.
  • T1beriu - Friday, June 23, 2017 - link

    16 cores and dual-channel don't make any sense.

    I think Zen 2's performance increase will come 90% from the frequency increase that comes with 7nm (4.6-5 GHz) and the remaining 10% from tiny architecture tweaks.
  • tarqsharq - Friday, June 23, 2017 - link

    Yeah, I agree.

    Also, if they stick to 8 cores for AM4 and do a die shrink, they could decrease the cost per chip and probably increase thermal overhead for OC.
  • shing3232 - Friday, June 23, 2017 - link

    I remember reading that Zen 2 will have 6 cores per CCX.
  • SaturnusDK - Sunday, June 25, 2017 - link

    It's just not clear at this point if the additional 2 cores will be fully functional ones, or specialized ultra-low-power cores (presumably Jaguar+ cores from the Xbox One X) that background tasks can be offloaded to, freeing up resources at full load and dramatically decreasing idle consumption.
  • PixyMisa - Friday, June 23, 2017 - link

    AMD have announced that the successor to Naples, Rome, will have up to 48 cores. So either they're going to squeeze 6 dies onto the package, or Zen 2 will have 12 cores.

    6 dies is unlikely because of the way the dies and chips are interconnected. So it looks like we'll get 12 cores in 2018 - and maybe 16 in Zen 3.

    Which as you say is a lot for dual-channel RAM. I assume AMD will be increasing cache sizes and working on faster memory support as well. DDR5 is still at least 3 years away, so no help from that quarter for a while.
  • turtile - Saturday, June 24, 2017 - link

    Adding more cores will require redesigning the architecture in order to work with current motherboards. Each 8-core complex has its own dual-channel memory controller, so if you keep adding more 8-core complexes, you need more memory channels and slots. They could make 12-core complexes, but that would lower the memory bandwidth per core and require more work.
  • jjj - Friday, June 23, 2017 - link

    Samsung has provided more details than that; there are 5nm and 4nm nodes, with 4nm using GAA.
    TSMC has their 7nm HPC version and the version targeting automotive, not just what you list.
    And of course there is FD-SOI for both GF and Samsung, and that one is relevant enough in consumer: lower-end mobile, image sensors, IoT.
  • Banykrk - Friday, June 23, 2017 - link

    - On the TSMC roadmap: not CLN12ULP but CLN22ULP, and this was official, yes, 22nm planar.
    - TSMC 7FF+ is EUV, and this is also official information.
    - In GF's case, 22nm SOI ULP and ULL and 14nm SOI are missing.
    I included only logic nodes.
  • Frenetic Pony - Friday, June 23, 2017 - link

    Next year should be great for upgrading your current laptop. Both AMD (which will have Zen 2-based mobile parts) and Intel will be putting out new nodes. Some quick math shows current 15-watt parts could squeeze into sub-tablet space. And Core Y-like parts could fit into (larger) phone-like form factors!

    Imagine a Windows phone ditching the "Phone" OS part and just running full Windows. Or a decent gaming laptop fitting into a modern "ultrabook"-like form factor. Hell yeah.
  • Hurr Durr - Saturday, June 24, 2017 - link

    Eh, I wish it were that rosy. Realistically, new nodes will first need to be optimized on desktop, so we're likely two years away from gaming notebooks in our pockets. I'd much rather have a new phone anyway; the Lumia 730 is nice as a daily driver, but certainly shows its age.
  • TheinsanegamerN - Saturday, June 24, 2017 - link

    Perhaps. It depends on whether AMD grows a backbone and demands OEMs make decent machines. I've had a Lenovo L440 for years. I wanted a Kaveri-powered machine, and then a Carrizo machine. All I could find were cheap 15-inch machines with single-channel memory and cheap casings.

    AMD making a great chip is half the battle. Wrangling OEMs in and making them behave is another battle entirely, one AMD has refused to fight thus far.
  • Meteor2 - Sunday, June 25, 2017 - link

    It's always amused me that it took Microsoft to show hardware companies how to produce quality. It's like they'd given up. Shame Microsoft has too; perhaps they think 'job done, back to our core'.

    Intel's Ultrabook was a fine initiative also. AMD needs something similar but they also can't spread themselves too thinly.
  • mapesdhs - Tuesday, June 27, 2017 - link

    Funny thing about MS: their OSes drive me nuts, but I'm usually impressed with their peripheral tech. I went hunting for a wireless mouse with good range, and the MS basic model worked the best by miles.
  • KAlmquist - Friday, June 23, 2017 - link

    One way to go from a 14nm process to a 7nm process is to cut the feature size in half, which would reduce the area by a factor of 4 (the ideal-scaling arithmetic is spelled out below). GlobalFoundries is promising an area reduction of a factor of 2, which suggests that GF has doubled the marketing hype.

    If this is true, it is good news for the development of semiconductor technology going forward. We are facing physical limits on how small features can be made on silicon-based semiconductors. It may be physically impossible to develop a 1nm process on silicon using feature-size reduction alone. But if GF is able to double the amount of marketing hype each generation, it will be able to hit 1nm in six generations without reducing feature size AT ALL.

    I expect that there are limits to what marketing hype can achieve. Ultimately, marketing is limited by the gullibility of the public. But Moore's law, which predicts a doubling of density every two years, has had a good 40-year run, and it's not clear we've hit the end yet. If increased marketing hype has anything close to as good a run as increased density, we can look to see advances in marketing hype driving semiconductor technology for many years into the future.
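    For reference, the ideal-shrink arithmetic behind that first point, assuming a pure 2-D scale-down of the same layout:

        \[
        \frac{A_{7\,\mathrm{nm}}}{A_{14\,\mathrm{nm}}} = \left(\frac{7}{14}\right)^{2} = \frac{1}{4}
        \]

    versus GloFo's stated 7LP claim of "more than twice" the density of its 14nm process, i.e. an area reduction of a bit over 2x.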
  • SaturnusDK - Friday, June 23, 2017 - link

    I think it shows remarkably good business sense on the part of GloFo to realize that trying to develop a 10nm production process, investing in manufacturing facilities for it, and having to run it until it has at least repaid those costs would not make sense for their customers, primarily AMD, when just stretching the 14nm process a bit further would enable AMD to leapfrog a process node ahead of their competition (Intel).

    The brilliance is the staged development and realizing what can be done now and what can be done later: accepting that yields and profits might be a bit lower with current DUV technology and then improving both as EUV evolves. And note that GloFo focuses on performance. The 7nm process is squarely intended for performance products like AMD CPUs and GPUs, compared to the low-power 7nm processes, primarily for ARM SoCs, that Samsung and TSMC plan to implement first.

    That means that Intel will only have about a six-month, or maybe shorter, process lead with Cannon Lake before AMD leapfrogs them and takes over the process lead. And it doesn't seem like Intel has any concrete plans for implementing 7nm any time soon.

    It's difficult to see this in any other way than that it spells big trouble for Intel going ahead. One of their biggest selling points has always been a process lead that allowed their chips to be clocked higher and therefore have greater single-core performance, because IPC gains will be minuscule going forward.
  • smilingcrow - Saturday, June 24, 2017 - link

    You've missed the fact that Intel and GF use different terminologies when describing a node, with Intel's being more upfront.
    On that level alone it's hard to compare them; plus, the lithography size is only one metric, so it's impossible to compare them at this point.
    But the very fact that we are having this discussion does in itself point to the change in the industry.
  • Meteor2 - Sunday, June 25, 2017 - link

    Yep, it was a management masterstroke to skip 10 nm. Sometimes all it takes is one big, bold move (and making it stick, of course; Boeing did with the 707 and 747 but not with the 787, to give some examples).
  • 0ldman79 - Friday, June 23, 2017 - link

    I'm very happy with my 14nm Skylake. I'm pretty interested in what they can do with 10nm or 7nm power-wise.

    It would be very interesting if they focused on power draw rather than processing power on the next run. Let the software and hardware level out. We're already at a pretty impressive level of hardware; let's get this level of performance with 16+ hours of battery life.

    I get 10 hours out of a Dell Inspiron 7559 out on a tower site and it'll play Arkham Knight when I get home. 7nm would be downright nice.
  • iwod - Saturday, June 24, 2017 - link

    Will future AMD Radeon GPUs be fabbed at GF as well? I am wondering if this is the reason for the delay in AMD's graphics department. TSMC is too occupied with mobile SoCs from Apple and Qualcomm. And GF, having the ties and contract with AMD, will have to design a high-performance node anyway, which fits both Zen and Radeon; in the best-case scenario AMD will have a node advantage over Nvidia.

    I wonder how Zen 2 and Zen 3 are going to play out. We know Zen 2 is on 7nm, which means it will come in 2H18, a little later than what I expected.

    I am eagerly waiting for a Zen + Vega APU. That is going to be perfect in every way, and then a 7nm APU will only make it better. CPU performance has nearly reached commodity level for most, and it is the GPU that matters. Intel's GPUs aren't bad, but all GFX acceleration for web pages, UI rendering, or other daily usage still works better on AMD/Nvidia. And all this is discounting the lead in gaming and pro uses.
  • Meteor2 - Sunday, June 25, 2017 - link

    Vega appears to still be far behind Pascal in efficiency terms; I think a Ryzen CPU and a small Pascal GPU (e.g. 1050) is and will be a better bet than a Vega APU.
  • mat9v - Sunday, June 25, 2017 - link

    If the GloFo 14nm process has the same effect on Vega as it has on Zen, then an APU can be both fast and energy efficient as long as it is clocked right; it may very well be more power efficient than Nvidia at low clocks. Say, take an R7 1700, cut half of the cores, and get a 45 W chip (at 3.2 GHz + some turbo); slap on a 2k-SP Vega GPU clocked at, say, 1 GHz for maybe 50 W, and you get a good APU for desktop. Lower the clocks and put it in a laptop for a 50 W part (an R7 1700 clocked at 2.8 GHz at 0.75 V can eat 35 W; strip half the cores for a 25 W chip, add a small Vega for 25 W, and you get a Haswell-i7-sized chip with good GPU performance).
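    Summing the commenter's own guesses (back-of-envelope estimates, not measured figures), the implied power budgets would be:

        \[
        P_{\text{desktop APU}} \approx 45\ \text{W} + 50\ \text{W} = 95\ \text{W},
        \qquad
        P_{\text{laptop APU}} \approx 25\ \text{W} + 25\ \text{W} = 50\ \text{W}
        \]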
  • Threska - Sunday, June 25, 2017 - link

    Deep UV. Next might be X-ray.
  • SaturnusDK - Monday, June 26, 2017 - link

    No. Deep UV, or DUV, is the technology all fab makers use today. Extreme UV, or EUV, is the next step. X-rays are still at least a decade away from being needed.
  • The Von Matrices - Monday, June 26, 2017 - link

    Silicon is transparent to X-rays. You need some other material from which to manufacture your transistors before you can even consider X-rays.
  • SaturnusDK - Monday, June 26, 2017 - link

    Silicon is transparent to some forms of x-rays. X-rays are just short wavelengths of electromagnetic radiation. EUV wavelengths at 13.5nm are already extremely close to technically being x-rays, which start at 10nm. X-ray lithography is entirely possible with the type of soft x-rays also called BEUV (Beyond Extreme UV), down to 6.7nm wavelengths, but eventually the frequency limitations of silicon will necessitate moving to other materials anyway.
  • Threska - Monday, June 26, 2017 - link

    Which I believe IBM was doing in the story.
  • FourEyedGeek - Tuesday, June 27, 2017 - link

    Don't GlobalFoundries have a history of giving dates of technical advancement and never making those dates, with huge delays?

    A QUESTION:
    To people smarter than me: is there a benefit to x86 CPUs being redesigned to have multi-tiered cores? Example: an 8-core CPU with 2 cores (no HT) set to the maximum the silicon can handle, say 6 GHz, while the other 6 cores (with HT) run at half the maximum the silicon can handle, say 3 GHz. The most demanding processing threads are placed onto the faster cores.

    I know that programming is taking advantage of parallelism, but there are still many applications that cannot. I also know there is Turbo Boost, but that cannot run 100% of the time. Anyway, I don't know and wanted other people's thoughts. Thanks
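    Software can already do a crude version of that thread placement today; the sketch below pins work to specific cores on Linux. The core numbering and the fast/slow split are hypothetical (a real design would want the OS scheduler to do this automatically, which is roughly what Arm's big.LITTLE phones rely on):

        import os
        from multiprocessing import Process

        FAST_CORES = {0, 1}              # hypothetical high-clock cores
        SLOW_CORES = {2, 3, 4, 5, 6, 7}  # hypothetical lower-clock SMT cores

        def pinned(cores, work):
            """Restrict this worker process to the given CPU set, then run the work."""
            os.sched_setaffinity(0, cores)  # Linux-only; 0 = the calling process
            work()

        def latency_critical():
            sum(i * i for i in range(20_000_000))  # stand-in for a hot, serial thread

        def background_batch():
            sum(i * i for i in range(20_000_000))  # stand-in for parallel bulk work

        if __name__ == "__main__":
            procs = [Process(target=pinned, args=(FAST_CORES, latency_critical))]
            procs += [Process(target=pinned, args=(SLOW_CORES, background_batch))
                      for _ in range(4)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()

    Turbo Boost is essentially the dynamic version of the same idea, trading a fixed fast/slow split for boosting whichever cores have thermal headroom at the moment.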
  • pav1 - Friday, June 30, 2017 - link

    Single-digit nanometer sizes are here... what next? I suppose rather than further miniaturization, we need to look at customizing silicon for the application for further performance. We're just getting started with machine learning and AI, which will need unique silicon: ASICs which learn and optimize their architecture based on usage. Architectural improvements are a big bottleneck; once hooked up with machine learning, things should improve.
  • FourEyedGeek - Monday, July 3, 2017 - link

    I was just thinking about this: we have CPUs as a general processor and the GPU as a hybrid graphics and machine learning processor. Is there any benefit to separating the GPU and machine learning components so they could potentially be more effective at their primary tasks?

    So computer systems would have the CPU for general processing, the GPU for graphics, and, for those that want it, an MPU for machine learning. Do graphics cards take a performance impact for including machine learning, or do they simply add more silicon, increasing costs to consumers?
  • Anymoore - Friday, July 28, 2017 - link

    So actually EUV 7nm still requires double patterning, from that diagram.
  • Anymoore - Sunday, October 14, 2018 - link

    "Intel demonstrated pelliclized photomasks that could sustain over 200 wafer exposures, but we do not know when such pellicles are expected to enter mass production." So the mask will burn up within hours of exposure?
