The tech market right now is an example of why we want competition between silicon designers and manufacturers. Intel is trying to pull up its socks to catch back up to AMD, and that's creating better products for end users, at better prices. I think we all forgot the glory days when Intel and AMD were in real competition (Athlon 64 X2/Phenom days, anyone?) and price-to-performance was very good. But that was over 10 years ago now...
AMD will have 200 cores soon. Imagine being able to do 200 things at once on your home computer. So what if Intel does 50 things faster, I will be able to do 200 things.
All that matters is being able to multitask on as many things as possible. If I run a docker instance, that is one thing. If I use Cortana, which I very much enjoy using for my personal assistance, then that is a second thing. AMD is very good at releasing chips that do lots of things at once. It doesn't matter if they do each thing a lot slower as long as they can do many things. I am looking forward to all the games that can use hundreds of cores at once. Maybe a good chess game? Then AMD will be the best at chess games.
AMD have proven that at any time they can release processors with many more slower cores than any other company. 200 or 400 cores will be no problem. Would you buy a processor that ran everything perfectly that was 10 cores or would you buy one with 400 cores just to say you are the best at doing most things slowly, but many things very fast? PKZIP will be no problem for the 400 core AMD chip while the competitors will be chugging away. The 200 core games will be very good.
Once Intel cancels Xe, expect them to try again with a "x86 GPU". Presumably something like 96 Atom cores with AVX512 or something.
And it will work just as well as every other time they tried it (for values of "they" that also include Sony thinking at first that Cell could replace a GPU).
Putting 96 cores on a chip is one thing. Building the network fabric to hold it together is another. And supplying the I/O is the worst part. The only way this is happening is if you could stack the CPU on top of all the RAM, but that has mostly been given up.
Those things don't need an entire core to themselves. Having 8-16 cores is plenty for even a power user's desktop, until you get into serious multi-threaded workloads (A/V editing/rendering, for example).
95% of the consumer market doesn't want "200 cores to do 200 things"; they want adequate performance for basic tasks with 12+ hours of battery life, a small footprint, and a low power bill.
Keep hallucinating. There won't be a consumer 32-core CPU any time soon. Even for Threadripper, which is intended for creators/prosumers, 128 cores aren't on the horizon anytime soon, let alone 200+ cores.
Memory bandwidth, thermal and cost constraints simply won't allow it.
You'd be crazy to think either of these companies (AMD or Intel) will exist in their current forms in just a few years. x86 needs to die. Any legacy requirements can be appropriately emulated using virtual machines or code morphing on alternative architectures.
The disturbing part is that AMD and Intel are seemingly in denial over this, because neither has any ARM or alternative-ISA designs in their pipeline for the next 5 years. Seriously, in 5 years ARM will have 50% of the market, and you can bet that'll shift other consumer segments currently dominated by x86 (such as game consoles).
Emulators aren't the most efficient way of doing things, but it's hard to deny where computing seems to be heading. Neither will abandon it until they have to, and even then there will be a market to support x86 for decades after.
Serious question. I've been hearing this same line for over 20 years, but I still haven't heard any reason that actually solves a practical problem. It's always some variant of "it's old", "it requires legacy support", "Intel is evil", "RISC is faster", yadda yadda. Basically it's complaining about imaginary problems that engineers mitigated into oblivion years ago, or an ideological conviction that immature/imaginary technologies will magically avoid the compromises and tradeoffs that every mature technology has endured.
"ARM will have 50% of the market"
50% of which market? There are already billions of ARM devices out there in the wild; they're wildly popular. I wouldn't be surprised if there are more active ARM CPUs in the world than active x86 CPUs.
ARM is proving itself successful, x86 is remaining popular, and technology solutions are better than ever despite (or because of) this current coexistence.
"Any legacy requirements can be appropriately emulated using virtual machines or code morphing on alternative architectures."
This has been the mating call of x86 replacement advocates for decades, including the inventor of x86 itself back in the 90s, and none of them proved economical. So while you may be right this time, I'm not going to bet on you being more prescient than the thousands of engineers who were funded by billions of dollars across numerous projects in the past.
Whenever x86 sunsets, whatever new software is written at the time will coincidentally be written to support its replacement architecture.
Nah, ARM is coming. Apple are leading the way but the others will jump ship pretty rapidly when it is shown that no, you don't need to pay both the x86 monetary and technical debt tax to be competitive.
OS developers as well as all other major software firms are slowly moving away from x86 altogether. We are witnessing a beginning to the slow death of x86. It may take 5 or 10 years, but it is starting now.
Couldn't disagree more. I want Intel to be competitive with AMD. I think part of the problem with the last decade is that Intel was complacent, because AMD was no threat until Ryzen. I don't want AMD to get complacent either.
Krzanich was an engineer with decades of experience, wasn't he? Jensen Huang has a technical background, and Nvidia's advances are still far better than what Intel did over a similar timeframe, but it certainly appears that AMD is managing to put out its most competitive salvo against Nvidia in years. Turing was a stumble, and Ampere is great, but RDNA2 looks to be making up a lot of lost ground.
I don't think having a tech background is the only thing that determines success, although it certainly doesn't seem to hurt. I agree with most other posters: having two competitors will definitely be better than having one.
"Krzanich was an engineer with decades of experience, wasn't he?"
He was an undergraduate chemistry major who started as a process engineer at Intel. To my knowledge, electrical engineering was never in his wheelhouse, and he followed a more conventional managerial path in rising through the ranks. He is most certainly not at the intellectual level of whizzes like Lisa Su, whose corpus of doctoral work on electrical engineering is par excellence.
I'm not sure what you mean by Turing was a stumble? You mean because it didn't achieve the same level of performance gain as Pascal? The problem is that these designs don't exist in a vacuum. NVIDIA chose to focus on RTX and ML that generation, which is paying off because it's now a second generation technology while AMD is only now going to offer it as a first generation technology next year, which means NVIDIA might be on their third generation when AMD is still on their first.
2nd generation 3080? What's that supposed to be? If you're talking about the variant with 20GB RAM, that's hardly "second gen" and people are going to be very disappointed when they realise how pointless it is to have so much VRAM at this point in time.
Overall I'm loving the idea that we shouldn't believe RDNA 2 will release this year despite the evidence to the contrary, and that it'll be terrible despite evidence to the contrary, but that we should believe there will be a "2nd generation 3080" despite there being *no* evidence in favour of such an assertion. Classic FUD.
I think the best thing for the market is for Intel to take a bit more of a beating. Intel was complacent and/or had a run of real engineering problems (probably a bit of both), and AMD came back with a great CPU design, but AMD's market position is still pretty weak. Intel still pushes the OEMs around in the server and notebook markets. It's apparent in the product stacks, but what is less apparent is whatever deals are happening behind closed doors.
"Cutting deals" of varying degrees of legitimacy and shadiness will always exist, but the asymmetry between AMD's and Intel's market positions, in terms of capacity and capital, makes the market inherently unhealthy. It doesn't need to be 50/50 market share, but I think Intel still needs to be taken down a peg or two so AMD can position itself (and the market along with it) for the long term.
Have to agree. I see Intel's voluntary bloodletting and find it hard to see it as anything other than overdue consequences of their corporate mentality.
I'm not sure how much Intel can push OEMs around in the server room after AMD could supply the chips and Intel couldn't. Also the really big boys (AWS/Google/MS) don't need the OEMs, and AMD is getting in there.
It looks like their old tricks are working just fine in the notebook market, although this is the first generation where AMD has really tried to crack it. OEMs might just think it isn't worth annoying Intel until AMD can get a few generations shipping.
None of us are privy to exactly what each of those incremental refinements were, but they did gradually result in higher performance and/or improved yields, both of which are worthy achievements. Even though Intel's 10 nm started out poor, if they are equally persistent in their refinements, it will end up performing very well. But by then, the competition will have moved on from 7 nm to 5 nm. Intel needs to pick up the pace. How does TSMC do it so well, and so quickly?
Nice article! Ian, wait until you have a Super Enhanced Enhanced SuperFin node. That'll be fun. Probably the first table where I've seen a 10+ as the original, followed by 10 without the + as the NEXT gen!
Should have just called it 10.0, 10.1, 10.2, etc., or 10r1, 10r2, 10r3, etc.
The problem with the pluses wasn't the numerical incrementing; it's that past 3 of them it gets hard to parse the exact number quickly, since they're repeating shapes, in addition to the "++ = +1" connotation around any computing topic.
Except that's categorically untrue. Each revision of the 14nm process has improved upon the previous, notably so.
Look at how poorly the initial Broadwell parts clocked and scaled (they didn't even release mainstream performance 5xxx chips!) vs. Comet Lake and tell me they're the same process.
Except you are wrong, and it does mean something. They have different design rules, FFS. You can't just call it 14nm when in reality you must redesign and tape out again.
I don't blame you: you want to keep your contacts fairly happy. But Intel have now reached a point where customers just can't tell what they're buying.
It's not just the process names. It's not just the crazy numbering scheme, although if it wasn't for ark.intel.com, it would be.
We've long had the situation where laptop OEMs can change the effective performance of a processor by changing the effective TDP, but with Tiger Lake Intel have been taking things way further. We're now at the position where it is effectively impossible to choose between laptops based on performance, because you just can't tell how the processor will perform. The OEMs won't tell you how they've configured their laptops beyond a processor model number.
I, for one, am giving up on Intel marketing. If I want performance I can be fairly sure about, it looks like I have to buy AMD (and they're not as consistent as they should be, due to different laptop thermals. They don't have the option there: Intel do, but aren't taking it.)
Jerry Pournelle once wrote that "years ago when AT&T tried to market PC's I said that if they bought Colonel Sanders they'd advertise hot dead chicken." Intel seem to have forgotten that dig.
>We're now at the position where it is effectively impossible to choose between laptops based on performance, because you just can't tell how the processor will perform.
Intel's misleading and backhanded TDP tactics aside: "the CPU model doesn't tell you the performance" has been true since Kaby Lake R, when Intel first moved its U-series to quad cores.
There is genuinely no "one benchmark score". These might as well be different process nodes and in wildly different places in the product stack. It's impossible to know what power limits have been chosen.
Ever since laptops became thermally constrained, every notebook's TDP (PL1) / PL2 / Tau and thus performance is completely configurable and not standardized.
The blame should also extend to laptop manufacturers, who damn well know the TDP (PL1) / PL2 / Tau values they explicitly programmed and designed around.
HP picks a 35 W PL2. Dell picks a 50 W PL2. Acer's Efficiency mode, if chosen, sets a 25 W PL2.
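To make those knobs concrete, here's a minimal sketch, in Python with made-up numbers, of how an Intel-style power budget behaves: the chip may burst up to PL2, but a moving average of recent draw, weighted over a window of roughly Tau seconds, must stay at or below PL1. The function and the EWMA model are my simplification for illustration, not Intel's actual algorithm.

```python
def allowed_power(history, pl1, pl2, tau, dt=1.0):
    """Return the power (watts) the CPU may draw this tick.

    history: recent per-tick power draws (watts), oldest first.
    pl1/pl2: sustained and burst power limits; tau: averaging window (s).
    """
    # Exponentially weighted moving average of past power draw.
    ewma = 0.0
    alpha = dt / tau
    for p in history:
        ewma = (1 - alpha) * ewma + alpha * p
    # While the average sits under PL1, bursts up to PL2 are allowed;
    # once the budget is exhausted, the chip is clamped back to PL1.
    return pl2 if ewma < pl1 else pl1

# A "Dell-style" 50 W PL2 config sustains bursts longer than a 35 W one.
print(allowed_power([], pl1=15, pl2=50, tau=28))         # fresh start: bursts to PL2
print(allowed_power([50] * 120, pl1=15, pl2=50, tau=28)) # sustained load: clamped to PL1
```

The point of the sketch is that two laptops with the identical CPU model but different PL1/PL2/Tau choices spend very different amounts of time at boost clocks.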
In the end, I hope AMD sticks it to Intel by demanding that OEMs building AMD laptops clearly label their long-term and short-term power limits. Nothing but competition will force Intel to change.
It is stupid to criticize Intel for delivering a microprocessor that offers OEMs the flexibility to optimize parameters for different use cases. If you want to know the performance, ask the OEMs (or don't be lazy and read the reviews).
Not stupid at all. It's entirely within their power to enforce certain design parameters. It's to their benefit not to, though - this way they can upsell higher-performing CPUs to unwitting consumers more easily.
Enforce what? Don't be ridiculous. Besides, system performance depends on many factors, not just CPU anyway. Whoever is buying a computer based solely on the CPU SKU deserves whatever they are getting anyways.
"Enforce what" - minimum performance / cooling standards? It's not that hard, Intel already have a bunch of standards that OEMs must adhere to. No validation, no sticker, done.
Here's a bizarre idea - you could even have separate model numbers for OEMs to use depending on the performance their implementation is capable of. To pluck an idea out of thin air, you could add a U on the end for the ultra-low-power and add an H for high-performance. 😏
The rest of your post is just waffle. System performance depending on many factors doesn't mean it's okay to sell CPUs that perform differently under the same damned product name. Saying "it's up to the customer" doesn't absolve Intel of that deception in the first place. Next you'll be trying to sell me on your "multi-level marketing" scheme, caveat emptor.
Process size follows generation in all cases, so 14 nm comes after the 5775C. My nomenclature is very simple: each generation starts with no "+", and a "+" is appended for each subsequent generation on the same node, so 14++++++ is the 7th generation at 14 nm.
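That scheme is simple enough to mechanize. A toy sketch (the function name is mine, purely illustrative):

```python
def node_name(base_nm, generation):
    """Generation g at a given feature size gets (g - 1) plus signs."""
    return f"{base_nm}{'+' * (generation - 1)}"

print(node_name(14, 1))  # 14       (first generation, no plus)
print(node_name(14, 7))  # 14++++++ (7th generation at 14 nm)
```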
If Intel can play the name game to hide their seven year stall at 14 nm and now at what appears to be 10 nm, perhaps Intel should just drop the XX nm altogether, go with 7XL for Rocket Lake and L for Alder Lake, when they get to 7 nm it would be M and 5 nm would be S.
Face it, Intel has so scotched up all their processor names and node names to date: Bronze, Silver, Gold and Platinum, G, L, X, K, KF, R, MX, XM, T, F, H, KFA and TKFC (Totally Krispy Fried Cooker). Intel has more processor and node names than they have processors!
Not true: both Broadwell (5775C) and Skylake (6700K) are built on the same 14nm process—probably a "slightly" optimized one, but Broadwell is based on Haswell as a process node shrink (Tick), while Skylake is a new architecture (Tock).
Incorrect. Broadwell was the "first crack" at 14nm, and Skylake's 14nm variant had improved characteristics on the process front, as well as a newer architecture. Yields for Broadwell were not as good, it needed higher voltages, and high clocks were not as easily attainable as on a similar core design at 22nm (Haswell). The process improvements ameliorated those issues.
You are so delusional. They don't have that many nodes. Processes improve all the time, but some improvements require no change on the design front, some require a simple re-spin, and some require design changes to take advantage of new design rules.
The ones requiring no re-spin aren't new nodes at all; those are generally counted as yield improvements.
I just realized I had never owned a 14nm product until the end of 2019, in a laptop: i7 950, i5 4460, i7 4720HQ, i7 4790, and those lasted until I got this i7 9750H, as AMD's 4000 series hadn't been released yet. My desktops are a 3600X and a 3900X.
Kind of crazy how well those CPUs stood the test of time, since I remember, before my 950, upgrading about every 1.5 years.
I'm asking because I'm not sure if Intel mentioned that Alder Lake will be manufactured on 10ESF. If they did (and I just didn't catch it), then I'm more than a happy person. I just hope the next-gen Golden Cove + 10ESF combination will be great.
The problem is that line width (14nm, 10nm, 7nm, etc.) is a lousy way to describe fab processes. It's worse than apples vs. oranges; it's more like apples vs. orangutans. Seeing people get overly hyped on process steps is just plain stupid. Look at the benchmarks for the apps you care about; that's all that really matters.
I totally agree. People obsessed with process names should check this article - https://hexus.net/tech/news/cpu/145645-intel-14nm-... It shows that transistor density of Intel 14nm+++ is close to that of AMD/TSMC 7nm.
They are not. After the advent of FinFET (and maybe even before it), process names do not carry any useful information about the merits of the process. It's just the name of a menu item in the foundry's catalog. If they wanted to, Intel could name their next process 1nm. They won't. Nobody cares.
Process names stopped relating to most structure sizes well before FinFET, and I'm well aware that the name itself, on its own, doesn't convey useful information about the process. What names do convey is which process came after which for a given foundry; they imply a significant difference such as a decrease in average feature size, and sometimes they give a general idea of which industry generation the process belongs to. What they don't tell you is whose foundry produces smaller and/or more performant transistors, but as I said in the first place, they're useful as a simple reference point for discussion; you can't easily discuss something that doesn't have a name.
Intel won't name their next generation 1nm because it wouldn't be the next logical step after their current generation. You're literally proving yourself wrong by pointing out that they won't do that.
You also totally skipped past copping to the falsehood that 14nm+++ is "close to" TSMC 7nm. 14nm++ is around 37.22 MTr/mm² (source: https://en.wikichip.org/wiki/mtr-mm%C2%B2 ), while Renoir on 7nm measures in at about 63 MTr/mm².
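For scale, a quick back-of-the-envelope on those two quoted figures (density numbers vary chip by chip, so treat this as illustrative only):

```python
# Densities quoted above, in MTr/mm^2.
intel_14nm_pp = 37.22   # Intel 14nm++ per the wikichip page linked above
tsmc_7nm_renoir = 63.0  # AMD Renoir on TSMC 7nm

ratio = tsmc_7nm_renoir / intel_14nm_pp
print(f"TSMC 7nm is ~{ratio:.2f}x denser")  # ~1.69x: hardly "close to"
```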
That TEM cut looks an awful lot like an SRAM block, which has notoriously poor scaling with process node. SRAM cells from 28nm-generation processes are not double the size of 14nm SRAM cells; they're more like 20% larger. However, as you point out in your next comment, node naming stopped being a useful metric somewhere between 65nm and 45nm. The industry stuck with it based on the IEEE roadmap of full-node-step naming, but it doesn't remotely align with reality. The old standard was full-width half-pitch, since the center-to-center distance between gates approximately aligned with the channel width. Intel led the industry's departure from that standard, instead measuring the effective gate length. In later generations of planar nodes (45nm and smaller), gates were packed densely enough that FWHP no longer accurately represented gate width; it was SMALLER than the gate dimension. With FinFETs that went a bit far in the other direction, since the control surface width of a fin is not the same as in a planar device: the gate wraps over the fin, and that surface area is the important factor in design and operation. For the 14/16nm generation of devices, the minimum feature size of interest was actually 6-7nm for the Intel, TSMC, and Samsung nodes.
These are all from "IC Knowledge LLC" as posted by Electronics Weekly. Transistor density (MTr/mm²):
Intel 10nm: 106
Samsung 5LPE: 133.56
TSMC 5FF: 185.46
Then, from other places (this is Wikipedia, but following the links to check that they're right): TSMC N7FF (first-generation 7nm): 96.5.
So TSMC's first 7nm generation is nearly as dense as Intel's first 10nm, interesting. Meanwhile TSMC's N7FF+ is 114, Apple's A13 chip is built on this.
Their 5nm node is supposedly 186, it's used for Apple's A14 chip.
That number for Intel 10nm isn't accurate - it was their original target and it hasn't been reached in practice. It looks like they had to relax a lot of their design rules to make the process yield well.
Anandtech's quoted number for density with Lakefield is 49.4 MTr/mm², while apparently Tiger Lake is closer to 40 MTr/mm².
Billions of dollars go into process steps. Talking about next-gen processors isn't all about performance; it's about the industry of hardware that goes into building the machines that enable those processes. Hundreds of thousands of jobs, supply chains, the works. So yes, we do care about process node technology. A whole friggin' lot.
I once heard from an Intel exec that told a bunch of press who started asking about 10nm that 'process node doesn't matter'. I came down on him like a ton of bricks. I haven't seen him speak to the press since. I hope it's not you.
Intel used to be run and led by real subject-matter experts: people with Ph.D.s or master's degrees in chemistry, physics, or materials science. I wonder which business school that executive got his degree from, but he embodies the approach that subject-matter know-how isn't really required to run a place. Unfortunately, the destination is then often "the ditch".
"business school that executive got his degree from, but he embodies the approach that subject matter Know-how isn't really required to run a place."
The mantra of the typical (i.e., Ivy League) MBA: "Managing is a separable skill, can be applied to any business, and we are the masters of managing." That's why most American business has been shifted out of the country: it saves a few pennies per unit. China didn't steal American jobs; the MBAs gave them away (Nixon "opened" China for the cheap labour, not its 'consumer' market, which didn't exist), since it was no skin off their noses. In due time, only MBAs will have the moolah to buy all that Chinese-made stuff.
On one hand I get it: you are venting and frustrated like so many others. On the other hand, get a life. You are making a mountain out of a molehill. They have already recognized the naming-scheme issue, and as you said, it's a big company and it takes time to correct it and do better. They are in the process of doing so, so drop it already and MOVE ON.
Get a life? What? You're not privy to the dozens of emails going back and forth about what products are what process nodes because they keep being corrected and recorrected. I've had financial analysts, those who follow this stuff but perhaps not to the detail we do, reach out and say that this article makes it a lot easier to understand the what and the why. So yeah, get a life. Sure thing bud.
I hope Ryan is okay (health-wise and otherwise)! I actually got a bit concerned - this launch is a classic "Ryan does a deep dive review" moment. Or did you guys at AT get on Jensen's sh#" list so they wouldn't send you review samples? But if so, I can't see why that would be!
Yeah, I too am sad that we haven't received his insight on Ampere, as a lot of the content put out by other sites has left me wanting (and don't get me started on the YouTube soft-serve junk).
On the flip side, I'm even more upset about how much of CA is on fire. 😫
Focus on the process node as a nominal feature size is over. Why can't we just let it go?
This Tiger Lake bit, with SuperFin, would not be the same without that new capacitor design for the metal layer. A stack of materials, each layer on the order of three Ångstroms, that's nuts.
As long as we are fetishizing the light source wavelength or whatever, let's talk about the level of complexity that must be addressed.
So maybe Intel will not discuss design rules or validation protocols; that's intellectual property that they rely upon every bit as much as the frickin' ASML laser beams. Okay, you can't get them to comment, or provide slides, so not much to write about.
But we might at least entertain the notion that a godlike, perfect nano bot might well assemble some device at a 14nm scale that far exceeds what is considered possible in 2020.
So much appreciation for the inquisitive and specialized work you are doing at AnandTech. The safe haven I can always look to for objective, unbiased analysis. Waiting for your Ampere GPU review.
Windows NT, when it launched, could support x86, DEC Alpha, MIPS, and other CPU architectures transparently. If only they'd stuck with that - we'd have ARM laptops and desktops as a matter of course now, and Intel/AMD would be a whole step ahead of where they are, with proper competition between instruction sets.
"Intel ever wants to become a foundry player again." Funny, I thought they were basically running all their fabs at full steam (thus a player for a massively large portion of the market). Granted, for a bit they will be using others (they always have for some stuff) for some main launches now, but only until they right the fab ship, and they have many ways to do that.
Acting like Intel is out of the game while making $23.6B NET INCOME TTM is almost as bad as Ryan calling 1440p the enthusiast standard at the 660 Ti launch... ROFLMAO. Go see the comments on that article to see how stupid his and J. Walton's arguments were. Walton eventually resorted to name calling/personal attacks, etc. I buried you guys with your OWN data... ROFL.
Oh, well, Intel's not a portal site here, so... Yeah, I own the stock and wouldn't touch AMD with a 10ft pole if YOU were holding it. I said the slide was coming; we're down from 94 to 77 now? A few more ~100mil quarters and people will take it back down to 30, and if they can't then prove a billion/Q NET INCOME, they'll go way under that at some point.
That said, RAISE YOUR PRICES amd, so you can finally break 1B NET INCOME for a few quarters while owning some of the best cpus for years. If you don't break a billion in the next Q or two, you need to be bought, or CEO fired. NV just took back 9% share. Intel just had a record Q. You are doing nothing but hurting YOUR net income by not raising prices on very good product. Quit trying for cheap share, and start chasing RICH like NV/Intel. People buying parts under $250 on either side don't make you rich. Just stop consoles altogether and you'll have more R&D for stuff that makes more than 10-15% margins (consoles are made for $95-105 last gen, AMD made single digits for much of it, then mid teens, meaning 15% or less, or you'd say 16%). Consoles are why your cpu dropped out of the race for round1 (had to design 2 of those instead of cpus 7yrs ago or so) and gpu sucked all through the refresh etc. Timing is rough here, but you get the point.
They made a stupid bet on consoles dictating PC life, and well, NV said nope, and we listened to NV mostly :) You won't win with price if the other guy is kicking you perf wise. Richer people pay for perf, while the poor want that discount. That is the difference between an AMD Q report vs. Intel/NVDA. NV looks at possible margin and says, "consoles? ROFL. Whatever dude, I like making more on workstation/server and flagship desktops." Intel said the same and shafted celeron/pentium etc (poor people chips left 10% empty handed for ages) while moving wafers to high margin stuff (thus even losing on some sales, but still gaining revenue/income). Dump the cheap stuff when silicon is short (everywhere) and your enemy has good product. IE, fight only in stuff that makes highest margin(forget 8-15% crap like consoles - AMD said single digits early on, NOT ME).
I think what you are missing is we enjoy playing AMD on our consoles very much. Who cares how much money a company makes. The market has spoken and said if AMD makes a billion a year, it is still as valuable as Intel or Nvidia. It is possible that in 20 or 30 years AMD might make that $23B that year and then you will feel very silly for saying the stock is not worth $30. I like AMD because of the 486DX4-120. It was faster than the DX3-100. I have been a fan ever since. Also, I liked ATI cards very much. Nvidia liked Voodoo2 cards and bought them for pennies on the dollar. AMD cards might be noisier and slower but they still are good for all the last generation games before Nvidia came out with their cheat of ray tracing technology. I still have a 7800 adapter which is quite fast for a lot of games. Even Diablo #2 is quite nice on it.
So while you are talking money, I think John Carmack would approve of the AMD cards and processors of today. When AMD put 100 or 200 cores on a chip then people will know how serious they can be. Why would you want 8, 10, or even 16 ultra fast cores for computing, when you could have 200 cores to do more.
Also, I think it is good that AMD are putting people in Taiwan to work instead of always focusing their labor on Americans and Texans like they used to.
Holy crap, is AMDSuperFan TheJian's sockpuppet? They both make almost exactly as little sense as each other, and I can't imagine why anyone else would bother replying seriously 😂
So you just don't understand stocks or how to run a business... No comments on my data then... OK. Thanks for confirming my point, since you clearly can't debate it. As for commenting on his post: he likes to play consoles. So what does that do for AMD's income? You two both don't get it.
https://www.macrotrends.net/stocks/charts/AMD/amd/... https://www.macrotrends.net/stocks/charts/AMD/amd/... 16 years of data; pull those up, have a stock chart of the last decade up on another monitor, and you should get the point... If you don't get what I said, you're not too bright. It was simple talk. Easy math. Learn to debate: https://islamreigns.files.wordpress.com/2019/01/pa... It's a comic where the link leads... but some sites can't use SVGs, which is where it's found everywhere else, it seems. It's talking about YOU being at the bottom. Name calling. L1. Nice work. I'd block you, but AnandTech isn't smart enough to allow a checkbox for it ;)
So because you're not a console fan, companies are supposed to stop making them? OK. You're a proud Intel shareholder; hmmm, how much has the stock grown in the last 8 years? If you're buying Intel stock just because you support or like Intel, I hope no one follows your stock advice.
What is pro-Intel about telling AMD to start making money by raising prices? I spent most of the post explaining why they should stop making crap that has no profits and move to stuff that will make them rich. What part of that do you not get? I am practically begging them to make money so they can have more R&D, win, etc. The fact that I own Intel is just simple math. You are DUMB if you own AMD at this price, or you simply haven't done some quick math.
https://islamreigns.files.wordpress.com/2019/01/pa... Learn to debate. You said nothing and, worse, failed to understand the content of my post. It's a comic where that jpg is... SVGs of it are everywhere, but the jpg is a little tougher to find. You failed.
AMD isn't making enough on them; that is the point. If you're short on silicon, make the highest-margin stuff, right? It's that simple. Ask Intel.
Don't care what they made in 8 years. I've only owned them since the drop to 43, while they were still making the same income as a few weeks before at 70. I don't like ANY company; I like the money they make me, though. Does that count? I BUY hardware from the winner, period, for what I need done most. I have all 3 in my PCs (Intel, AMD, and Nvidia... ROFL). I was an AMD reseller for 8 years. I like the company, just hate the management. Lisa Su made 59mil last year, the most of any CEO in the S&P 500. Her company made 660m, less than all the others on the top-ten list, who make BILLIONS of NET INCOME. Intel made 23.6B NET INCOME, and last year Intel's CEO made 66mil (on 23.6B!).
My points are about money here, which you don't seem to have a concept of, so I don't expect you to get the message. Move along. I've never been proud to own a stock (the name means nothing). They're just things to make profit on. I hadn't owned Intel since $26 or so ages ago (~2006-2007, sold and never came back until now). That said, if you're talking last 8yrs, OK: Sept 2012 $22, today $50. Not bad money, but if you sold recently you could have had $70. Very good money now for most. But with a price of ~45, I like my chances of $100 by Q1 2022 or before. I'll leave early of course. I was explaining why to buy or not buy stuff; you just weren't listening. I gave links to 16 years of data. You can't read? It had nothing to do with being a FAN of company X. It was all about WHY AMD isn't (or IS, in NV/Intel's cases) making the money they should be, based on the price of the stock, share of the market, etc. This is stock advice. I don't see you debating any of it either. I see you making a fool of yourself.
But yeah, not a fan of what consoles are doing to AMD's NET INCOME and R&D, they should have passed like NV for the same reason as NV (robs from core R&D and no margin). Any silicon spent on a single digit to mid teens margin product (AMD said it) is WASTED and should be spent on higher end stuff for NET INCOME and REAL margins! See what Intel did. Short on silicon, they moved production to servers and HEDT. Screw celeron/poor people, not the rich. Without the rich to pay the bills you can't afford to support the poor ;) See 1/2 the country that doesn't pay a dime in taxes (is that a fair share?? LOL). I buy stocks so I don't have to care who makes my chip or what price it is, because it is FREE to me via the stock income. NV/AMD bought all my chips for years to come, though Intel will be buying next years probably (or MU as DDR5 etc kicks off and buy cycle with it, many others just do the homework)...ROFL
If you're as objective as you claim to be, why do you care about the management of AMD? Why does the CEO's pay matter to you? You buy hardware from a winner? Why not buy the best-performing hardware for what you need? Again, why does it matter what brand it is then? You're saying AMD might hit $30 soon, or $100? Or is $100 the target for Intel in 2022? Console margins are 15+% from 2015 onwards. You seem to be pretty good with the links; surprised you missed this one. Definitely not the same as Ryzen, but you don't abandon a market overnight. And if you're the ONLY player you can charge more for it. Zen2 in PS5 and Xbox will not be sold at cost or at a minimal margin either, I'm sure you know. Intel still makes Celerons and dual core stuff and Wi-Fi cards and more; maybe you should tell them to move that onto higher-margin stuff like server processors.
I love the idea that Nvidia isn't in consoles because they said "ROFL. Whatever dude", even though, you know, Nvidia is in the Switch - that most premium of consoles... and they tried selling their own console (the Shield) several times over... and the only reason they aren't doing business with Sony since the PS3 is because AMD could offer the full CPU/GPU package... and the only reason they aren't in the Xbox is because they tried to rip Microsoft off back with the OG Xbox.
But sure, it's because they're too cool for consoles (that they're still involved with). Seems legit. 😏
They did that to sell old chips stuck in inventory forever; Nintendo probably got a great deal on stuff that at that point was worth $0 to Nvidia. They didn't say whatever dude, they said no margin and it robs from core R&D, so we passed. They didn't pursue Nintendo; it was old crap that couldn't be sold to anyone else.
NV doesn't do low-end stuff until forced, or until they simply have nothing else to sell more of, get it? If I've tapped out the entire GPU market, making a mint, etc., then make the cheap stuff if you still have resources. IE, if NV is short on silicon they put out low-end models LAST (heck, they pretty much always do it; smarter).
What kind of SoC is in that premium console from Nintendo? Hint: it's not 7nm in that first one. https://arstechnica.com/gaming/2016/12/nintendo-sw... Hybrid consoles use LAST GEN TECH. So they didn't waste tons of R&D, did they? :) They are also very small chips, even the new ones (the Lite model has newer tech; the old Tegra was ~100mm^2). Xbox/PS4 were 400+. Those puny ones won't steal much from the 3000's, right? They went to Samsung, so this deal might have been all they could get out of TSMC at the time (Apple was buying all 7nm, now moved to 5nm; AMD/Intel bought a bunch of the freed up stuff).
Selling your own console and putting a chip in others for $10-15ea are very different things, and the Shield also brought their store for game sales (income off others' work). Not the same as 450mm^2 console chips for $10-15 each when those could be flagship CPUs/GPUs that make $100 or more. NV just ran out of cards in minutes; AMD claims they won't. It would be a lot easier if you weren't wasting silicon on $15 items, right? That size is a large AMD GPU not being sold for $500+. IE, the 3070 is ~393mm^2, and NV makes more than $15 on them. As the poor guy in semi, you should concentrate on INCOME, not units or share. This has nothing to do with being cool; it's about wasting R&D. It's about money, so yeah, legit. Your comment? Stupid. Intel screwed the low end when short 10% of silicon (couldn't fill about 10% of customers' PC's) and moved to HEDT/server. You don't seem to understand the conversation or how these companies work.
Shield was an attempt, again, to move old silicon, and it only cost 10mil to develop both the Shield TV and the handheld, they said... That is a small price to pay to move old chips worth at least as much, collect some money on the store maybe, push streaming, etc etc. It was a small price to pay and a good move business-wise at the time. They failed in mobile, so they tried to recover some of the wasted silicon collecting dust and push new streams of income while doing it. Good management. These things otherwise end up in AMD-style writeoffs (see Trinity etc, IIRC, multiple APU junk).
They are not in Xbox/Sony because they wanted MORE money. You are proving my point. AMD sold out cheap; NV wouldn't. Yeah, you're right. Thanks. They tried to MAKE MONEY, not RIP OFF MSFT. I wouldn't work for free either, basically. :) 15%, in semi? ROFL. Only if I can't make more on something else ANYWHERE. Note the Xbox 360 cost MSFT 3.5B or so, and Sony lost ~4B...ROFL. Jury still out on PS4/Xbox One etc. AMD thought they might beat NV tech by being in consoles and hoping games would aim at them, fixing their perf problems vs. NV GPUs. It's not working. See 9% going to NV over TECH. RT+DLSS sells... OUT, that is... in minutes.
Wake me when you actually have a data point and learn to debate. See Paul Graham's chart.
Yes, all fabs at full steam. Funny you forgot about 10nm and 7nm. "but it is only until they right the fab ship and they have many ways to do that." Like they've been doing it for the last 4 SkyLakes? Or was it 5? You're good at counting, you will know that for sure. Come for a debate when you actually know something about the process nodes and where they are. Play with historical numbers till then. That's what they are. Historical.
I am not sure how reliable the source is, but I heard Intel could not get double-digit yields on 10nm at first; then, after making the design less "innovative", yields were okay, but performance was within the margin of error of 14nm+++++++++++++
In practice, Ice Lake is roughly comparable to Comet Lake in everything but GPU performance; overall what it gained in IPC it lost in clock speed. This was *after* Intel had already relaxed their 10nm transistor density way below their initial claims of 67-100 million transistors per square millimetre.
They finally seem to have fixed that with Tiger Lake, but given the paper-launch nature of that release and their reluctance to discuss the 8-core variants, I'd be happy to surmise that either yields are still not great or they only have some fraction of their 10nm fab resources capable of manufacturing on the new "SuperFin" node variant.
Look, honestly, since Intel didn't have competition, why wouldn't you stay roughly on the same node? It saves money and time. You can argue that they should've been innovating, but are people forgetting they own stock in AMD too?
Wow the amount of missing the point in the comments has reached epic proportions. AMD will never win. They are a failed company and a failure as a business. They have been around for 50 years and have never amounted to anything more than a side note. They continue to hold on to a failed business model (stealing x86 tech from Intel) with a death grip. As we speak, right now, Apple, Google, Nvidia, and many other companies are developing better, faster, more power efficient mobile CPUs with ARM cores and standard IPs on TSMC 3 and 2nm processes. ARM as a disruption is over. ARM is already wearing the yellow jersey on the road, and we just have a few days left in the race. AMD will be the first to fall, and Intel will be next. Both companies need to make serious strategic changes if they want to exist in 10 years.
yeeeeman - Friday, September 25, 2020 - link
Mister Swan needs to go somewhere else because he is pathetic. He is consuming engineers' time with stupid things.
shabby - Friday, September 25, 2020 - link
Let him run Intel into the ground.
xenol - Friday, September 25, 2020 - link
And then let AMD take over the x86 market? That's even worse.
Drkrieger01 - Friday, September 25, 2020 - link
The tech market right now is an example of why we want competition between silicon designers/manufacturers. Intel is trying to pull up their socks to catch back up to AMD, and it's creating better product for the end users, and at better prices. I think we all forgot the glory days of when Intel and AMD were in good competition (Athlon 64 XII/Phenom days anyone?), and price to performance was very good. But that was over 10 years ago now...
Lord of the Bored - Saturday, September 26, 2020 - link
Heck yeah. We remember the great Athlon Wars, and mourn for those lost along the way. Pour one out for poor Cyrix, everybody.
velanapontinha - Saturday, September 26, 2020 - link
I feel ya
fteoath64 - Monday, October 19, 2020 - link
Don't forget Centaur as well as VIA!
AMDSuperFan - Friday, September 25, 2020 - link
AMD will have 200 cores soon. Imagine being able to do 200 things at once on your home computer. So what if Intel does 50 things faster, I will be able to do 200 things.
FunBunny2 - Friday, September 25, 2020 - link
"I will be able to do 200 things."
Unless you design nuclear bombs or hurricane predictions or auto-generate paperback detective tales... why?
Hifihedgehog - Friday, September 25, 2020 - link
Guess you didn't get the memo... Nuclear Bomb Simulator and Hurricane Simulator are coming to Steam in 2021!
AMDSuperFan - Friday, September 25, 2020 - link
All that matters is being able to multitask on as many things as possible. If I run a docker instance, that is one thing. If I use Cortana, which I very much enjoy using for my personal assistance, then that is a second thing. AMD is very good at releasing chips that do lots of things at once. It doesn't matter if they do each thing a lot slower as long as they can do many things. I am looking forward to all the games that can use hundreds of cores at once. Maybe a good chess game? Then AMD will be the best at chess games.
dotjaz - Friday, September 25, 2020 - link
You won't be able to buy any consumer/prosumer CPU with beyond 128 cores in the next 10 years. Mark my words. I doubt a 96-core CPU will be available anytime soon.
AMDSuperFan - Saturday, September 26, 2020 - link
AMD have proven that at any time they can release processors with many more slower cores than any other company. 200 or 400 cores will be no problem. Would you buy a processor that ran everything perfectly that was 10 cores or would you buy one with 400 cores just to say you are the best at doing most things slowly, but many things very fast? PKZIP will be no problem for the 400 core AMD chip while the competitors will be chugging away. The 200 core games will be very good.
wumpus - Monday, September 28, 2020 - link
Once Intel cancels Xe, expect them to try again with a "x86 GPU". Presumably something like 96 Atom cores with AVX512 or something. And it will work just as well as every other time they tried it (for values of "they" which also include Sony thinking at first that cell could replace a GPU).
Putting 96 cores on a chip is one thing. Building the network fabric to hold it together is another. And supplying the I/O is the worst part. The only way this is happening is if you could stack the CPU on top of all the RAM, but that has mostly been given up.
ilkhan - Saturday, September 26, 2020 - link
Those things don't need an entire core to themselves. Having 8-16 cores is plenty for even a power user's desktop, until you get into serious multi-threaded workloads, A/V editing/rendering, for example.
Samus - Saturday, September 26, 2020 - link
95% of the consumer market doesn't want "200 cores to do 200 things"; they want adequate performance to do basic things with 12+ hours of battery life, a small footprint and a low power bill.
dotjaz - Friday, September 25, 2020 - link
Keep hallucinating. There won't be a consumer 32-core CPU any time soon, even for Threadripper which is intended for creator/prosumers 128-core isn't on the horizon anytime soon, let alone 200+ cores. Memory bandwidth, thermal and cost constraints simply won't allow it.
inighthawki - Friday, September 25, 2020 - link
>> There won't be a consumer 32-core CPU any time soon, even for Threadripper
The 3970x is literally a 32 core variant of Threadripper.
JlHADJOE - Tuesday, October 6, 2020 - link
lol read his post again and realize you're quoting him the same way journalists quote Trump.
Before the comma:
>> There won't be a consumer 32-core CPU any time soon
After the comma:
>> even for Threadripper which is intended for creator/prosumers 128-core isn't on the horizon anytime soon
Sure the sentence is a bit run-on, but he's clearly making a distinction between consumer which I assume is Ryzen, and prosumer which is Threadripper.
Spunjji - Friday, September 25, 2020 - link
Please, SOMEBODY, take this Turing Test-failing account offline.
rbarone69 - Sunday, September 27, 2020 - link
hahaha I thought the very exact same thing!
MrSpadge - Friday, September 25, 2020 - link
Not sure, but I have a slight suspicion you may be joking here...
Spunjji - Monday, September 28, 2020 - link
It's kind of agonising that people keep writing serious responses to this comment section's most obvious troll.
supdawgwtfd - Monday, October 12, 2020 - link
Why do you think he is still here? He gets hits. Hits mean ad revenue. Which of course means money.
Anandtech's owners love him.
Samus - Saturday, September 26, 2020 - link
You'd be crazy to think either of these companies (AMD or Intel) will exist in their current forms in just a few years. x86 needs to die. Any legacy requirements can be appropriately emulated using virtual machines or code morphing on alternative architectures.
The disturbing part is AMD and Intel are seemingly in denial over this, because neither has any ARM or alternative IP designs in their pipeline for the next 5 years. Seriously, in 5 years ARM will have 50% of the market, and you can bet that'll shift other consumer segments currently dominated by x86 (such as game consoles).
Showtime - Saturday, September 26, 2020 - link
Emulators aren't the most efficient way of doing things, but it's hard to deny where computing seems to be heading. Neither will abandon it until they have to, and even then there will be a market to support x86 for decades after.
Spunjji - Monday, September 28, 2020 - link
What form do you think they'll exist in, and why?
What's your reason for predicting a 50% shift to ARM, and in which market?
mdriftmeyer - Tuesday, September 29, 2020 - link
ARM will be the Albatross around Nvidia's throat.
grant3 - Friday, October 2, 2020 - link
"x86 needs to die." Why?
Serious question. I've been hearing this same line for over 20 years.
But still haven't heard any reason that actually solves a practical problem.
It's always some variant of "it's old" "it requires legacy support" "intel is evil" "RISC is faster" yadda yadda. Basically it's complaining about imaginary problems that engineers mitigated into oblivion years ago, or an ideological conviction that immature/imaginary technologies will magically avoid the compromises and tradeoffs that every mature technology has endured.
"ARM will have 50% of the market"
50% of which market? There are already billions of ARM devices out there in the wild. They're wildly popular. I wouldn't be surprised if there are more ARM CPUs active in the world than there are x86 CPUs.
ARM is proving itself successful, x86 is remaining popular, and technology solutions are better than ever despite (or because of) this current coexistence.
"Any legacy requirements can be appropriately emulated using virtual machines or code morphing on alternative architectures."
This has been the mating call of x86 replacement advocates for decades, including the inventor of x86 itself back in the 90s, and none of them proved economical. So while you may be right this time, I'm not going to bet on you being more prescient than the thousands of engineers who were funded by billions of dollar across numerous projects in the past.
Whenever x86 sunsets, whatever new software is written at the time will coincidentally be written to support its replacement architecture.
throAU - Monday, September 28, 2020 - link
Nah, ARM is coming. Apple are leading the way but the others will jump ship pretty rapidly when it is shown that no, you don't need to pay both the x86 monetary and technical debt tax to be competitive.
RobJoy - Friday, October 23, 2020 - link
OS developers as well as all other major software firms are slowly moving away from x86 altogether. We are witnessing the beginning of the slow death of x86.
It may take 5 or 10 years, but it is starting now.
adt6247 - Friday, September 25, 2020 - link
Couldn't disagree more. I want Intel to be competitive with AMD. I think part of the problem with the last decade is that Intel was complacent, because AMD was no threat until Ryzen. I don't want AMD to get complacent either.
shabby - Friday, September 25, 2020 - link
AMD isn't run by a pencil pusher.
Drumsticks - Friday, September 25, 2020 - link
Krzanich was an engineer with decades of experience, wasn't he? Jen-Hsun has a technical background, and Nvidia's advances are still far better than what Intel did over a similar timeframe, but it certainly appears true that AMD is managing to put out their most competitive salvo against Nvidia in years. Turing was a stumble, and Ampere is great, but RDNA2 looks to be making up a lot of lost ground.
I don't think having a tech background is the only thing that determines success, although it often seems like it won't hurt. I agree with most other posters - having two competitors will definitely be better than having one.
Hifihedgehog - Friday, September 25, 2020 - link
"Krzanich was an engineer with decades of experience, wasn't he?"
He was an undergraduate chemistry major who started as a process engineer at Intel. To my knowledge, electrical engineering was never in his wheelhouse, and he followed a more conventional managerial path in rising through the ranks. He is most certainly not at the intellectual level of whizzes like Lisa Su, whose corpus of doctoral work on electrical engineering is par excellence.
michael2k - Friday, September 25, 2020 - link
I'm not sure what you mean by Turing was a stumble? You mean because it didn't achieve the same level of performance gain as Pascal? The problem is that these designs don't exist in a vacuum. NVIDIA chose to focus on RTX and ML that generation, which is paying off because it's now a second generation technology while AMD is only now going to offer it as a first generation technology next year, which means NVIDIA might be on their third generation when AMD is still on their first.
AMDSuperFan - Friday, September 25, 2020 - link
Don't forget Big Navi which will soundly beat the Nvidia Voodoo2 technology.
JKflipflop98 - Friday, September 25, 2020 - link
Big Navi is going to sit in a corner and beg NV for a few scraps of market share. Just like every other video card AMD has ever made.
Spunjji - Friday, September 25, 2020 - link
@michael2k - all signs still indicate that Big Navi should be here this year.
Luminar - Saturday, September 26, 2020 - link
Big Navi will be here this year you say? Perfect. Another calamity in 2020!
michael2k - Saturday, September 26, 2020 - link
And it will be competing against the 2nd generation 3080.
Spunjji - Monday, September 28, 2020 - link
2nd generation 3080? What's that supposed to be? If you're talking about the variant with 20GB RAM, that's hardly "second gen" and people are going to be very disappointed when they realise how pointless it is to have so much VRAM at this point in time.
Overall I'm loving the idea that we shouldn't believe RDNA 2 will release this year despite the evidence to the contrary, and that it'll be terrible despite evidence to the contrary, but that we should believe there will be a "2nd generation 3080" despite there being *no* evidence in favour of such an assertion. Classic FUD.
wumpus - Monday, September 28, 2020 - link
It was a mistake for most consumers to buy Turing. Jensen even admitted that with the "it's safe to upgrade from Pascal now" bit. No idea if "Vega VII" was a mistake for AMD, but it was what very few consumers needed.
Operandi - Friday, September 25, 2020 - link
I think the best thing for the market is for Intel to take a bit more of a beating. Intel was complacent and/or they had a run of real engineering problems (probably a bit of both), and AMD came back with a great CPU design, but AMD's market position is still pretty weak. Intel still pushes the OEMs around in the server and notebook markets. It's apparent in the product stacks, but what is less apparent is whatever deals are happening behind closed doors.
"Cutting deals" of varying degrees of legitness and shadiness are always going to exist, but the asymmetrical nature of AMD and Intel's positions in the market in terms of capacity and capital makes the market inherently unhealthy. It doesn't need to be 50/50 market share, but I think Intel still needs to be taken down a peg or two so AMD can position itself (and the market along with it) for the long term.
Spunjji - Friday, September 25, 2020 - link
Have to agree. I see Intel voluntarily bloodletting and find it hard to see that as anything other than overdue consequences for their corporate mentality.
wumpus - Monday, September 28, 2020 - link
I'm not sure how much Intel can push OEMs around in the server room after AMD could supply the chips and Intel couldn't. Also, the really big boys (AWS/Google/MS) don't need the OEMs, and AMD is getting in there.
It looks like their old tricks are working just fine in the notebook market, although this is the first generation that AMD has really tried to crack it. OEM's might just think it isn't worth annoying Intel until AMD can get a few generations shipping.
wolfesteinabhi - Friday, September 25, 2020 - link
We can now officially say "14nm" instead of +ThumbsUp+ here on AnandTech now... who needs an outdated like icon anyhow!!
Hifihedgehog - Friday, September 25, 2020 - link
14nm
sharathc - Friday, September 25, 2020 - link
Intel is demeaning the symbol of '+'
jbwhite1999 - Friday, September 25, 2020 - link
25 years ago this fall, Intel had a problem with multiplication of large numbers, so in essence, they have experience demeaning math like "+"
FunBunny2 - Friday, September 25, 2020 - link
"Intel had a problem with multiplication"
have they gone back to the future: mult by add in a do-loop? :)
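For the record, mult-by-add in a do-loop really is a one-liner plus a loop; a throwaway sketch (the function name is made up for illustration):

```python
def multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers the "do-loop" way:
    add a to a running total, b times."""
    total = 0
    for _ in range(b):  # the do-loop
        total += a
    return total
```

Real hardware, of course, uses shift-and-add rather than b iterations, so nobody actually pays for this.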
Hulk - Friday, September 25, 2020 - link
Multiplication is really just recursive addition, so this makes sense.
sharathc - Friday, September 25, 2020 - link
A typical meeting at Intel:
Engr 1: We have some improvements planned for 14++++++
Engr 2: Aaaaaaaaam. That should be 14+++++++
Manager 1: I lost track. How many plusses guys?
Engr 1: That's 7 + sir.
Engr 2: Did you say 7? (whaaaaa! he said 6 +)
Architect: Let us dump 6 and 7 Plusses. From tomorrow, I have 14+++++++++.
dotjaz - Friday, September 25, 2020 - link
Except none of those + nodes actually shrink feature size or add a dimension, therefore changing the number makes zero sense even as a joke.
dwbogardus - Friday, September 25, 2020 - link
None of us are privy to exactly what each of those incremental refinements were, but they did gradually result in higher performance and/or improved yields, both of which are worthy achievements. Even though Intel's 10 nm started out poor, if they are equally persistent in their refinements, it will end up performing very well. But by then, the competition will have moved on from 7 nm to 5 nm. Intel needs to pick up the pace. How does TSMC do it so well, and so quickly?
xenol - Friday, September 25, 2020 - link
Process node names, at least the number, have lost their meaning anyway.
Teckk - Friday, September 25, 2020 - link
Nice article!
Ian, wait until you have Super Enhanced Enhanced SuperFin nodes. That'll be fun.
Probably the first table where I've seen a 10+ as original followed by 10 without the + as the NEXT gen!
RSAUser - Saturday, September 26, 2020 - link
Super Enhanced Enhanced SuperFin++*
drexnx - Friday, September 25, 2020 - link
should have just called it 10.0, 10.1, 10.2, etc. or 10r1, 10r2, 10r3, etc.
The problem with the plusses wasn't the numerical incrementing, it was that past 3 of them it gets hard to parse the exact number quickly, since they're repeating shapes, in addition to the "++ = +1" thing around any computer topic.
Flunk - Friday, September 25, 2020 - link
Should just call it what it is, 14nm. The +s mean nothing.
drexnx - Friday, September 25, 2020 - link
except that's categorically untrue. Each revision of the 14nm process has improved upon the previous, notably so.
Look at how poorly the initial Broadwell clocked and scaled (they didn't even release mainstream performance 5xxx chips!) vs. Cannon Lake and tell me they're the same process.
drexnx - Friday, September 25, 2020 - link
er, vs. Rocket Lake, not Cannon Lake. (too many lakes...)
AMDSuperFan - Friday, September 25, 2020 - link
Where can I compare the Rocket Lake? Maybe you have the insider know-how?
Smell This - Friday, September 25, 2020 - link
Kaby 14nm 'backed-up' from the original design, and Chipzillah dumped tock-tick for **PAO**
When you say, "Each revision of the 14nm process has improved upon the previous" ... that would be incorrect.
dotjaz - Friday, September 25, 2020 - link
Except you can wrong and it does mean something. They have different design rules FFS. You can't just call it 14nm when in reality you must redesign and tape out again.
dotjaz - Friday, September 25, 2020 - link
*are
Spunjji - Friday, September 25, 2020 - link
Agreed. 10a, 10b etc. would also work.
jamesindevon - Friday, September 25, 2020 - link
Ian, you are being way too polite to Intel.
I don't blame you: you want to keep your contacts fairly happy. But Intel have now reached a point where customers just can't tell what they're buying.
It's not just the process names. It's not just the crazy numbering scheme, although if it wasn't for ark.intel.com, it would be.
We've long had the situation where laptop OEMs can change the effective performance of a processor by changing the effective TDP, but with Tiger Lake Intel have been taking things way further. We're now at the position where it is effectively impossible to choose between laptops based on performance, because you just can't tell how the processor will perform. The OEMs won't tell you how they've configured their laptops beyond a processor model number.
I, for one, am giving up on Intel marketing. If I want performance I can be fairly sure about, it looks like I have to buy AMD (and they're not as consistent as they should be, due to different laptop thermals. They don't have the option there: Intel do, but aren't taking it.)
Jerry Pournelle once wrote that "years ago when AT&T tried to market PC's I said that if they bought Colonel Sanders they'd advertise hot dead chicken." Intel seem to have forgotten that dig.
ikjadoon - Friday, September 25, 2020 - link
>We're now at the position where it is effectively impossible to choose between laptops based on performance, because you just can't tell how the processor will perform.
Intel's misleading and backhanded TDP tactics aside: "the CPU model doesn't tell you the performance" has been true since Kaby Lake R, when Intel first moved to quad-cores.
Look at the massive score variation between i5-8250U CPUs: https://www.notebookcheck.net/Intel-Core-i5-8250U-...
There is genuinely no "one benchmark score". These might as well be different process nodes and in wildly different places in the product stack. It's impossible to know what power limits have been chosen.
Ever since laptops became thermally constrained, every notebook's TDP (PL1) / PL2 / Tau and thus performance is completely configurable and not standardized.
The blame should also go to laptop manufacturers, who damn well know the TDP (PL1) / PL2 / Tau values that they explicitly programmed and designed around.
HP picks a 35 W PL2. Dell picks a 50 W PL2. Acer's Efficiency mode, if chosen, sets a 25 W PL2.
In the end, I hope AMD sticks it to Intel by demanding that OEMs building AMD machines clearly label their long-term and short-term power limits. Nothing but competition will force Intel to change.
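How much those OEM choices matter can be shown with a toy model. This is only a sketch of the idea behind Intel's turbo budgeting (an exponentially weighted moving average of package power compared against PL1, with PL2 capping short bursts); the function name and every number below are illustrative, not from any datasheet:

```python
def boost_seconds(pl1_w, pl2_w, tau_s, demand_w, dt=0.1, duration=60.0):
    """Toy turbo-budget model: the chip runs at up to PL2 watts while an
    exponentially weighted moving average of power stays below PL1, then
    drops back to PL1. Returns seconds spent running above PL1."""
    avg = 0.0        # EWMA of package power
    boost = 0.0
    for _ in range(int(duration / dt)):
        power = min(demand_w, pl2_w) if avg < pl1_w else pl1_w
        avg += (dt / tau_s) * (power - avg)   # EWMA update, time constant tau
        if power > pl1_w:
            boost += dt
    return boost

# The same hypothetical chip under two OEM configurations:
generous = boost_seconds(pl1_w=15, pl2_w=50, tau_s=28, demand_w=60)
stingy = boost_seconds(pl1_w=15, pl2_w=35, tau_s=8, demand_w=60)
```

With these made-up numbers, the generous configuration boosts at a higher power for roughly twice as long before settling to the same 15 W floor: two very different laptops wearing the same CPU model number.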
Spunjji - Friday, September 25, 2020 - link
This. 👍
velanapontinha - Saturday, September 26, 2020 - link
This!
lilo777 - Friday, September 25, 2020 - link
It is stupid to criticize Intel for delivering a microprocessor that offers OEMs the flexibility to optimize the parameters for different use cases. If you want to know the performance, ask the OEMs (or don't be lazy and read the reviews).
Spunjji - Friday, September 25, 2020 - link
Not stupid at all. It's entirely within their power to enforce certain design parameters. It's to their benefit not to, though - this way they can upsell higher-performing CPUs to unwitting consumers more easily.
lilo777 - Friday, September 25, 2020 - link
Enforce what? Don't be ridiculous. Besides, system performance depends on many factors, not just the CPU anyway. Whoever is buying a computer based solely on the CPU SKU deserves whatever they are getting anyway.
Spunjji - Monday, September 28, 2020 - link
"Enforce what" - minimum performance / cooling standards? It's not that hard, Intel already have a bunch of standards that OEMs must adhere to. No validation, no sticker, done.
Here's a bizarre idea - you could even have separate model numbers for OEMs to use depending on the performance their implementation is capable of. To pluck an idea out of thin air, you could add a U on the end for the ultra-low-power and add an H for high-performance. 😏
The rest of your post is just waffle. System performance depending on many factors doesn't mean it's okay to sell CPUs that perform differently under the same damned product name. Saying "it's up to the customer" doesn't absolve Intel of that deception in the first place. Next you'll be trying to sell me on your "multi-level marketing" scheme, caveat emptor.
Everett F Sargent - Friday, September 25, 2020 - link
Intel® Core™ i7-4770K Processor (4th Generation)
https://ark.intel.com/content/www/us/en/ark/produc...
Q2'13, 22+ nm
Intel® Core™ i7-4790K Processor (4th Generation)
https://ark.intel.com/content/www/us/en/ark/produc...
Q2'14, 22++ nm
Intel® Core™ i7-5775C Processor (5th Generation)
https://ark.intel.com/content/www/us/en/ark/produc...
Q2'15, 14 nm
Intel® Core™ i7-6700K Processor (6th Generation)
https://ark.intel.com/content/www/us/en/ark/produc...
Q3'15, 14+ nm (could not readily get one until Q1'16, they sold the i7-4790K during that holiday season)
Intel® Core™ i7-7700K Processor (7th Generation)
https://ark.intel.com/content/www/us/en/ark/produc...
Q1'17, 14++ nm
Intel® Core™ i7-8700K Processor (8th Generation)
https://ark.intel.com/content/www/us/en/ark/produc...
Q4'17, 14+++ nm (ditto holiday season availability as for the 6th Generation)
Intel® Core™ i9-9900K Processor (9th Generation)
https://ark.intel.com/content/www/us/en/ark/produc...
Q4'18, 14++++ nm (ditto holiday season availability as for the 6th/8th Generations)
Intel® Core™ i9-10900K Processor (10th Generation)
https://ark.intel.com/content/www/us/en/ark/produc...
Q2'20, 14+++++ nm
Intel® Core™ i9-11666K Processor (11th Generation aka Rocket Lake)
https://ark.intel.com/content/www/us/en/ark/produc...
Q2'21 (tbd), 14++++++ nm
Intel® Core™ i9-12666K Processor (12th Generation aka Alder Lake)
https://ark.intel.com/content/www/us/en/ark/produc...
Q4'21 (tbd), 10 nm (ditto holiday season availability as for the 6th/8th Generations, widely available Q2'22)
Seven generations of 14 nm in seven years (on average) aka a one year cadence. Seven 14 nm generations in seven years! :(
Ahsan Qureshi - Friday, September 25, 2020 - link
5775C is 14nm, not 22nm++. 4770K and 4790K are fabricated on the same 22nm process. There is no 22nm+ or 22nm++.
Everett F Sargent - Friday, September 25, 2020 - link
Process size follows generation in all cases, so 14 nm comes after 5775C. My nomenclature is very simple: each generation starts with no "+", and a "+" is appended for each subsequent generation on the same node, so that 14++++++ is the 7th generation at 14 nm.
If Intel can play the name game to hide their seven year stall at 14 nm and now at what appears to be 10 nm, perhaps Intel should just drop the XX nm altogether, go with 7XL for Rocket Lake and L for Alder Lake; when they get to 7 nm it would be M and 5 nm would be S.
Face it, Intel has so scotched up all their processor names and node names to date: Bronze, Silver, Gold and Platinum, G, L, X, K, KF, R, MX, XM, T, F, H, KFA and TKFC (Totally Krispy Fried Cooker). Intel has more processor and node names than they do processors!
Fulljack - Friday, September 25, 2020 - link
not true, both Broadwell (5775C) and Skylake (6700K) are built on the same 14nm process—probably a "slightly" optimized one, but Broadwell is based on Haswell and is a process node shrink (Tick), while Skylake is a new architecture (Tock).
Spunjji - Friday, September 25, 2020 - link
Incorrect. Broadwell was the "first crack" at 14nm and Skylake's 14nm variant had improved characteristics on the process front, as well as a newer architecture. Yields for Broadwell were not as good, and it needed higher voltages, while high clocks were not as easily attainable as on a similar core design at 22nm (Haswell). The process improvements ameliorated those issues.
dotjaz - Friday, September 25, 2020 - link
You are so delusional. They don't have that many nodes. Process improves all the time, but some improvements require no change on the design front, some require a simple re-spin, and some require design changes to take advantage of design rule changes.
The ones requiring no re-spin are not new nodes at all, because those are generally counted as yield improvements.
Spunjji - Friday, September 25, 2020 - link
Most of these improvements were already acknowledged by Intel, though not always named. The delusion is yours.
drexnx - Friday, September 25, 2020 - link
Devil's Canyon wasn't a process change, it was a TIM change and package cap change.
Skylake was just 14
Coffee and Coffee refresh are both 14++
Spunjji - Friday, September 25, 2020 - link
Pretty sure they rolled in unannounced changes, too. Check the voltage requirements / binning. They used to do that quietly all the time back then.
RSAUser - Saturday, September 26, 2020 - link
I just realized I had never owned a 14nm product until the end of 2019, as a laptop: i7 950, i5 4460, i7 4720HQ, i7 4790, and those lasted till I got this i7 9750H, as the AMD 4000s hadn't been released yet; my desktops are a 3600X and a 3900X.
Kind of crazy how well those CPUs stood the test of time, since I remember upgrading about every 1.5 years before my 950.
szabikopy - Friday, September 25, 2020 - link
Is the table on the 3rd page 100% legit?
I’m asking because I’m not sure if Intel mentioned that Alder Lake will be manufactured on 10ESF.
If they did (and I just didn’t catch it) then I’m more than a happy person. I just hope the next gen Golden Cove + 10ESF will be great
Ahsan Qureshi - Friday, September 25, 2020 - link
Yes. Alder Lake, Sapphire Rapids and Xe-HP will be manufactured on 10nm ESF.
Machinus - Friday, September 25, 2020 - link
Ok, but when will they actually sell desktop chips? 10nm++++, according to your chart?
Intel has become a broken clock. Tick-tock-tock-tock-tock...
dotjaz - Friday, September 25, 2020 - link
But the original article about Atom still says 10SF. If it's indeed 10nm Mark II or 10+ as originally named, then where did the 10++ come from?
soaringrocks - Friday, September 25, 2020 - link
The problem is that line width of 14nm, 10nm, 7nm, etc. is a lousy way to describe Fab processes. It's worse than Apples vs. Oranges, it's more like Apples vs. Orangutans. Seeing people get overly hyped on process steps is just plain stupid. Look at the benchmarks for the apps you care about, that's all that really matters.lilo777 - Friday, September 25, 2020 - link
I totally agree. People obsessed with process names should check this article - https://hexus.net/tech/news/cpu/145645-intel-14nm-...
It shows that transistor density of Intel 14nm+++ is close to that of AMD/TSMC 7nm.
Spunjji - Friday, September 25, 2020 - link
It really isn't when you compare the whole chip - they appear to have compared some of the worst structures for scaling in that article.
Nobody's really obsessed with names - they just serve a useful purpose for discussing differences.
lilo777 - Friday, September 25, 2020 - link
They are not. After the advent of FinFET (and maybe even before it), process names do not carry any useful information about the merits of the process. It's just the name of a menu item in the foundry's catalog. If they wanted to, Intel could name their next process 1nm. They won't. Nobody cares.
Spunjji - Monday, September 28, 2020 - link
Process names stopped relating to most structure sizes way before FinFET, and I'm well aware that the name itself - on its own - doesn't convey useful information about the process. What they do convey is which process came after which for a given foundry; they imply a significant difference such as a decrease in average feature size, and sometimes they convey a general idea of which industry generation the process belongs to. What they don't tell you is whose foundry produces smaller and/or more performant transistors, but as I said in the first place, they're useful as a simple reference point for discussion; you can't easily discuss something that doesn't have a name.
Intel won't name their next generation 1nm because it wouldn't be the next logical step after their current generation. You're literally proving yourself wrong by pointing out that they won't do that.
You also totally skipped past copping to the falsehood that 14nm+++ is "close to" TSMC 7nm. 14nm++ is around 37.22 MTr/mm² (Source: https://en.wikichip.org/wiki/mtr-mm%C2%B2 ) while Renoir on 7nm measures in at about 63 MTr/mm².
FullmetalTitan - Saturday, September 26, 2020 - link
That TEM cut looks an awful lot like an SRAM block, which has notoriously poor scaling with design node. SRAM cells from 28nm generations are not double the size of 14nm SRAM cells, more like 20% larger.
However, as you point out in your next comment, node naming stopped being a useful metric somewhere between 65nm and 45nm. The industry stuck with it based on the IEEE roadmap of full-node step naming, but it doesn't remotely align with reality. The old standard was full width half pitch, since the center to center distance between gates was approximately aligned with the channel width. Intel led the industry departure from that standard, instead measuring the effective gate length. In later generations of planar nodes (45nm and smaller), gates were packed densely enough that FWHP was not accurately representing gate width anymore; it was SMALLER than the gate dimension. With FinFETs that actually went a bit far in the other direction, since the control surface width of a fin is not the same as in a planar device: the gate wraps over the fin, and the surface area is the important factor in design and operation. For the 14/16nm generation of devices, the minimum feature size of interest was actually 6-7nm for Intel, TSMC, and Samsung nodes.
Spunjji - Monday, September 28, 2020 - link
Yup, you nailed it on that one. False conclusion (similar density) drawn from incomplete information (only one type of transistor measured).
RSAUser - Saturday, September 26, 2020 - link
These are all from "IC Knowledge LLC" as posted by Electronics Weekly.
Transistor density (MTr/mm²):
Intel 10nm: 106
Samsung 5LPE: 133.56
TSMC 5FF: 185.46
Then from other places; this one is from Wikipedia, but I followed the links to check that they're right:
TSMC N7FF (First generation 7nm): 96.5
So TSMC's first 7nm generation is nearly as dense as Intel's first 10nm, interesting. Meanwhile TSMC's N7FF+ is 114, Apple's A13 chip is built on this.
Their 5nm node is supposedly 186, it's used for Apple's A14 chip.
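Putting the numbers side by side makes the earlier "14nm+++ is close to 7nm" claim easy to check. A quick sketch in Python, using only the density figures quoted in this thread (they are the commenters' claims, not verified measurements):

```python
# Transistor densities quoted in this comment thread, in MTr/mm^2.
# These are the figures the commenters cite (IC Knowledge / WikiChip
# quotes), not independently verified numbers.
densities = {
    "Intel 10nm (original target)": 106.0,
    "Intel 14nm++": 37.22,
    "TSMC N7FF": 96.5,
    "TSMC N7FF+": 114.0,
    "TSMC N5": 186.0,
    "Samsung 5LPE": 133.56,
}

# Express everything relative to Intel 14nm++ to see how far off the
# "14nm+++ is close to TSMC 7nm" claim is on whole-chip numbers.
baseline = densities["Intel 14nm++"]
for name, d in sorted(densities.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:30s} {d:7.2f}  ({d / baseline:.2f}x Intel 14nm++)")
```

On these figures, first-generation TSMC 7nm comes out at roughly 2.6x the density of Intel 14nm++, so "close to" is quite a stretch.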
Spunjji - Monday, September 28, 2020 - link
That number for Intel 10nm isn't accurate - it was their original target and it hasn't been reached in practice. It looks like they had to relax a lot of their design rules to make the process yield well.
Anandtech's quoted number for density with Lakefield is 49.4 MTr/mm², while apparently Tiger Lake is closer to 40 MTr/mm².
Bagheera - Tuesday, March 9, 2021 - link
you should check this article:
https://semiwiki.com/semiconductor-services/ic-kno...
it's by someone who actually knows what they are talking about.
Ian Cutress - Friday, September 25, 2020 - link
Billions of dollars go into process steps. Talking about next gen processors isn't all about performance - it's the industry of the hardware that goes into building machines to enable those processes. Hundreds of thousands of jobs, supply chains, the works. So yes, we do care about process node technology. A whole friggin lot.
I once heard an Intel exec tell a bunch of press who started asking about 10nm that 'process node doesn't matter'. I came down on him like a ton of bricks. I haven't seen him speak to the press since. I hope it's not you.
eastcoast_pete - Saturday, September 26, 2020 - link
Intel used to be run and led by real subject matter experts - people with Ph.D.s or Master's degrees in chemistry, physics or materials science. I wonder which business school that executive got his degree from, but he embodies the approach that subject matter know-how isn't really required to run a place. Unfortunately, the destination is then often "the ditch".
FunBunny2 - Wednesday, October 7, 2020 - link
"business school that executive got his degree from, but he embodies the approach that subject matter Know-how isn't really required to run a place."
the mantra of the typical, i.e. Ivy League, MBA - "Managing is a separable skill, can be applied to any business, and we are the masters of managing." that's why most American business has been shifted out-of-country: it saves a few pennies per unit. China didn't steal American jobs, the MBAs gave them away (Nixon "opened" China for the cheap labour, not its 'consumer' market which didn't exist), since it was no skin off their noses. in due time, only MBAs will have moolah to buy all that Chinese made stuff.
Spunjji - Friday, September 25, 2020 - link
Thanks for this article. It's good to see it all laid out.
six_tymes - Friday, September 25, 2020 - link
on one hand I get it, you are venting and frustrated like so many others. on the other hand, get a life. you are making a mountain out of a molehill. They have already recognized the naming scheme issue, and as you said it's a big company and it takes time to correct and do better; they are in the process of doing so. drop it already and MOVE ON.
Ian Cutress - Saturday, September 26, 2020 - link
Get a life? What? You're not privy to the dozens of emails going back and forth about what products are what process nodes because they keep being corrected and recorrected. I've had financial analysts, those who follow this stuff but perhaps not to the detail we do, reach out and say that this article makes it a lot easier to understand the what and the why. So yeah, get a life. Sure thing bud.
Spunjji - Monday, September 28, 2020 - link
"MOVE ON" is the favourite slogan of the gaslighter. It's getting quite funny seeing the overlap between Intel defenders and Brexit shills.
eastcoast_pete - Friday, September 25, 2020 - link
Now the "+++++" era is over, get ready for "10 nm super-duper fin" and "10 nm hyperfin"; "10 nm ultimate fin" is also a distinct possibility.
Spunjji - Monday, September 28, 2020 - link
M-M-M-M-M-M-MONSTER FIN
Fin
fin
ilkhan - Friday, September 25, 2020 - link
Y'all are aware nvidia announced some new gpus, right?
Rudde - Saturday, September 26, 2020 - link
California wildfires happened and the gpu reviews were delayed.
Luminar - Saturday, September 26, 2020 - link
The wildfires were a result of AMD sabotage. Someone tried benchmarking his FX-8350 and R9 290x build.
Spunjji - Monday, September 28, 2020 - link
😂
eastcoast_pete - Saturday, September 26, 2020 - link
I hope Ryan is okay (health- and otherwise)! I actually got a bit concerned - this launch is a classic "Ryan does a deep dive review" moment. Or, did you guys at AT get on Jensen's sh#"list so they wouldn't send you review samples? But if so, I can't see why that would be!
Spunjji - Monday, September 28, 2020 - link
Yeah, I too am sad that we haven't received his insight on Ampere, as a lot of the content put out by other sites has left me wanting (and don't get me started on the YouTube soft-serve junk).
On the flip side, I'm even more upset about how much of CA is on fire. 😫
watersb - Saturday, September 26, 2020 - link
Focus on the process node as a nominal feature size is over. Why can't we just let it go?
This Tiger Lake bit, with SuperFin, would not be the same without that new capacitor design for the metal layer. A stack of materials, each layer on the order of three Ångstroms, that's nuts.
As long as we are fetishizing the light source wavelength or whatever, let's talk about the level of complexity that must be addressed.
So maybe Intel will not discuss design rules or validation protocols; that's intellectual property that they rely upon every bit as much as the frickin' ASML laser beams. Okay, you can't get them to comment, or provide slides, so not much to write about.
But we might at least entertain the notion that a godlike, perfect nano bot might well assemble some device at a 14nm scale that far exceeds what is considered possible in 2020.
davide445 - Saturday, September 26, 2020 - link
Much appreciation for the inquisitive and specialist work you are doing at Anandtech. The safe haven I can always look to for objective, unbiased analysis. Waiting for your Ampere GPU review.
dontlistentome - Saturday, September 26, 2020 - link
Windows NT, when it launched, could support x86, DEC Alpha, MIPS and other CPU architectures transparently.
If only they'd stuck with that - we'd have ARM laptops and desktops as a matter of course now, and Intel/AMD would be a whole step ahead of where they are, with proper competition between instruction sets.
Alaa - Saturday, September 26, 2020 - link
Bloody hell!
TheJian - Saturday, September 26, 2020 - link
"Intel ever wants to become a foundry player again."Funny, I thought they were basically running all fabs full steam (thus a player for a massively large portion of the market). Granted for a bit they will be using others (always have for some stuff) for some main launches now, but it is only until they right the fab ship and they have many ways to do that.
Acting like Intel is out of the game making 23.6B NET INCOME TTM is almost as bad as Ryan calling 1440p the enthusiast standard at the 660 Ti launch...ROFLMAO. Go see the comments on that article to see how stupid his/J. Walton's arguments were. Walton eventually resorted to name calling/personal attacks etc. I buried you guys with your OWN data...ROFL.
Oh, well, Intel's not a portal site here so...Yeah, I own the stock and wouldn't touch AMD with a 10ft pole if YOU were holding it. I said the slide was coming, we're 94 down to 77 now? A few more ~100mil Q's and people will take it back down to 30, and if they can't prove then a Billion/Q NET INCOME then they'll go way under that at some point.
That said, RAISE YOUR PRICES amd, so you can finally break 1B NET INCOME for a few quarters while owning some of the best cpus for years. If you don't break a billion in the next Q or two, you need to be bought, or CEO fired. NV just took back 9% share. Intel just had a record Q. You are doing nothing but hurting YOUR net income by not raising prices on very good product. Quit trying for cheap share, and start chasing RICH like NV/Intel. People buying parts under $250 on either side don't make you rich. Just stop consoles altogether and you'll have more R&D for stuff that makes more than 10-15% margins (consoles are made for $95-105 last gen, AMD made single digits for much of it, then mid teens, meaning 15% or less, or you'd say 16%). Consoles are why your cpu dropped out of the race for round1 (had to design 2 of those instead of cpus 7yrs ago or so) and gpu sucked all through the refresh etc. Timing is rough here, but you get the point.
They made a stupid bet on consoles dictating PC life, and well, NV said nope, and we listened to NV mostly :) You won't win with price if the other guy is kicking you perf wise. Richer people pay for perf, while the poor want that discount. That is the difference between an AMD Q report vs. Intel/NVDA. NV looks at possible margin and says, "consoles? ROFL. Whatever dude, I like making more on workstation/server and flagship desktops." Intel said the same and shafted celeron/pentium etc (poor people chips left 10% empty handed for ages) while moving wafers to high margin stuff (thus even losing on some sales, but still gaining revenue/income). Dump the cheap stuff when silicon is short (everywhere) and your enemy has good product. IE, fight only in stuff that makes highest margin(forget 8-15% crap like consoles - AMD said single digits early on, NOT ME).
AMDSuperFan - Saturday, September 26, 2020 - link
I think what you are missing is we enjoy playing AMD on our consoles very much. Who cares how much money a company makes. The market has spoken and said if AMD makes a billion a year, it is still as valuable as Intel or Nvidia. It is possible that in 20 or 30 years AMD might make that $23B that year and then you will feel very silly for saying the stock is not worth $30. I like AMD because of the 486DX4-120. It was faster than the DX3-100. I have been a fan ever since. Also, I liked ATI cards very much. Nvidia liked Voodoo2 cards and bought them for pennies on the dollar. AMD cards might be noisier and slower but they still are good for all the last generation games before Nvidia came out with their cheat of ray tracing technology. I still have a 7800 adapter which is quite fast for a lot of games. Even Diablo #2 is quite nice on it.
So while you are talking money, I think John Carmack would approve of the AMD cards and processors of today. When AMD put 100 or 200 cores on a chip then people will know how serious they can be. Why would you want 8, 10, or even 16 ultra fast cores for computing, when you could have 200 cores to do more.
Also, I think it is good that AMD are putting people in Taiwan to work instead of always focusing their labor on Americans and Texans like they used to.
Spunjji - Monday, September 28, 2020 - link
Holy crap, is AMDSuperFan TheJian's sockpuppet? They both make almost exactly as little sense as each other, and I can't imagine why anyone else would bother replying seriously 😂
TheJian - Tuesday, September 29, 2020 - link
So you just don't understand stocks or how to run a business... No comments on my data then? OK. Thanks for confirming my point, since you clearly can't debate it.
As for commenting on his post: he likes to play consoles. So what does that do for AMD income? You two both don't get it.
https://www.macrotrends.net/stocks/charts/AMD/amd/...
https://www.macrotrends.net/stocks/charts/AMD/amd/...
16 years of data, pull those up and have a stock chart of the last decade up on another monitor and you should get the point...If you don't get what I said, you're not too bright. It was simple talk. Easy math. Learn to debate:
https://islamreigns.files.wordpress.com/2019/01/pa...
It's a comic where the link leads... But some can't use SVGs, which are everywhere else it seems. They're talking about YOU being at the bottom. Name calling. L1. Nice work. I'd block you, but anandtech is not smart enough to allow a checkbox for it ;)
Teckk - Sunday, September 27, 2020 - link
So because you're not a console fan, companies are supposed to stop making them? Ok.
You're a proud Intel shareholder? Hmmm, how much has the stock grown in the last 8 years?
If you're buying Intel stocks just because you support or like Intel, I hope you don't give stock advice, and that no one follows it.
Qasar - Sunday, September 27, 2020 - link
na, that's just TheJian's usual pro-Intel rant.
TheJian - Wednesday, September 30, 2020 - link
What is pro Intel about telling AMD to start making money by raising prices? I spent most of the post explaining why they should stop making crap that has no profits and move to stuff that will make them rich. What part of that do you not get? I am practically begging them to make money so they have more R&D, win, etc. The fact that I own Intel is just simple math. You are DUMB if you own AMD at this price, or you simply haven't even done some quick math.
https://islamreigns.files.wordpress.com/2019/01/pa...
Learn to debate. You said nothing and worse, failed to understand the content of my post. Comic where that jpg is...SVG's of it are everywhere, but jpg a little tougher to find. You failed.
TheJian - Wednesday, September 30, 2020 - link
AMD isn't making enough on them, that is the point. If you're short on silicon, make the highest margin stuff, right? It's that simple. Ask Intel.
Don't care what they made in 8yrs. I've only owned them since the drop to 43 while still making the same income as a few weeks before at 70. I don't like ANY company; I like the money they make me though. Does that count? I BUY hardware from the winner, period, for what I need done most. I have all 3 in my PCs (Intel, AMD, and Nvidia...ROFL). I was an AMD reseller for 8yrs. I like the company, just hate the management. Lisa Su made 59mil last year, the most of any CEO in the S&P 500 on earth. Her company made 660m, less than all the others on the top ten list, who make BILLIONS of NET INCOME. Intel made 23.6B NET INCOME, but last year Intel's CEO made 66mil (on 23.6B!).
My points are about money here, which you don't seem to have a concept of, so I don't expect you to get the message. Move along. I've never been proud to own a stock (name means nothing). They're just things to make profit on. I didn't own Intel since $26 or so ages ago (~2006-2007, sold never came back until now). That said, if you're talking last 8yrs. Ok.
Sept 2012 $22, today, $50. Not bad money, but if you sold recently you could have had $70. Very good money now for most. But with a price of ~45, I like my chances of $100 by Q1 2022 or before. I'll leave early of course. I was explaining why to buy or not buy stuff you just weren't listening. I gave links to 16 years of data. You can't read? It had nothing to do with being a FAN of company X, it was all about WHY AMD isn't (or IS in NV/Intel cases) making money that they should be based on the price of the stock, share of the market etc. This is stock advice. I don't see you debating any of it either. I see you making a fool of yourself.
But yeah, not a fan of what consoles are doing to AMD's NET INCOME and R&D, they should have passed like NV for the same reason as NV (robs from core R&D and no margin). Any silicon spent on a single digit to mid teens margin product (AMD said it) is WASTED and should be spent on higher end stuff for NET INCOME and REAL margins! See what Intel did. Short on silicon, they moved production to servers and HEDT. Screw celeron/poor people, not the rich. Without the rich to pay the bills you can't afford to support the poor ;) See 1/2 the country that doesn't pay a dime in taxes (is that a fair share?? LOL). I buy stocks so I don't have to care who makes my chip or what price it is, because it is FREE to me via the stock income. NV/AMD bought all my chips for years to come, though Intel will be buying next years probably (or MU as DDR5 etc kicks off and buy cycle with it, many others just do the homework)...ROFL
Teckk - Wednesday, September 30, 2020 - link
If you're as objective as you claim to be, why do you care about the management of AMD? Why does it matter to you what the CEO's pay is?
You buy hardware from a winner? Why not buy the most performant hardware that you need? Again, why does it matter what brand it is then?
You're saying AMD might hit $30 soon, or 100? Or is $100 the target for Intel in 2022?
Console margins are 15+% from 2015 onwards. You seem to be pretty good with the links, surprised you missed this one. Definitely not the same as Ryzen, but you don't abandon a market overnight. And if you're the ONLY player you can charge more for it. Zen2 in PS5 and Xbox will not be sold at cost or at a minimal margin too, I'm sure you know.
Intel still makes Celerons and dual core stuff and Wi-Fi cards and more, maybe you should tell them to move that onto higher margin stuff like server processors.
Everett F Sargent - Sunday, September 27, 2020 - link
That was much worse than the Unabomber's manifesto! You need to move out of that yurt way up there in Mongolia.
Spunjji - Monday, September 28, 2020 - link
I love the idea that Nvidia isn't in consoles because they said "ROFL. Whatever dude", even though, you know, Nvidia is in the Switch - that most premium of consoles... and they tried selling their own console (the Shield) several times over... and the only reason they aren't doing business with Sony since the PS3 is because AMD could offer the full CPU/GPU package... and the only reason they aren't in the Xbox is because they tried to rip Microsoft off back with the OG Xbox.
But sure, it's because they're too cool for consoles (that they're still involved with). Seems legit. 😏
Beaver M. - Tuesday, September 29, 2020 - link
What do you mean *tried* to sell their own console?
The Shield is very successful.
TheJian - Wednesday, September 30, 2020 - link
They did that to sell old chips stuck in inventory forever; Nintendo probably got a great deal on stuff that at that point was worth $0 to Nvidia. They didn't say whatever dude, they said no margin and it robs from core R&D, so we passed. They didn't pursue Nintendo; it's old crap that couldn't be sold to anyone else.
NV doesn't do poor stuff until forced, or they simply have nothing else to sell more of, get it? If I've tapped out the entire gpu market, making a mint, etc, then make poor stuff if you still have resources. IE, if NV is short on silicon they put out low models LAST (heck they pretty much always do it, smarter).
What kind of soc is in that premium console from nintendo? Hint, it's not 7nm in that first one.
https://arstechnica.com/gaming/2016/12/nintendo-sw...
Hybrid consoles use "LAST GEN TECH". So they didn't waste tons of R&D did they? :) They are also very small chips even with the new ones (the lite model has newer tech, old T4 was ~100mm^2). Xbox/ps4 were 400+. Those puny ones won't steal much from the 3000's right? They went to samsung, so this deal might have been all they could get out of TSMC at the time (apple was buying all 7nm, now moved to 5nm, amd/intel bought a bunch of the freed up stuff).
Selling your own console and a chip in others for $10-15ea is very different, and it also brought their store for game sales (income off others' work). Not the same as 450mm^2 console chips for $10-15 each when those could be flagship cpus/gpus that make $100 or more. NV just ran out of cards in minutes. AMD is claiming they won't. It would be a lot easier if you weren't wasting silicon on $15 items, right? That size is a large AMD gpu not being sold for $500+. IE, the 3070 is ~393mm^2. NV makes more than $15 on them. As the poor guy in semi, you should concentrate on INCOME, not units or share. This has nothing to do with being cool, it's about wasting R&D. It's about money, so yeah, legit. Your comment? Stupid. Intel screwed the low end when short 10% of silicon (couldn't fill about 10% of customers' PCs), and moved to HEDT/server. You don't seem to understand the conversation or how these companies work.
Shield was an attempt again, to move old silicon and only cost 10mil to dev both shield TV and the handheld they said...That is a small price to pay to move old chips worth at least as much and collect some money on the store maybe, push streaming, etc etc. It was a small price to pay and a good move business wise at the time. They failed in mobile so tried to recover some of the wasted silicon collecting dust and push new streams of income while doing it. Good management. These end up in AMD writeoffs (see trinity etc IIRC, multiple apu junk).
They are not in Xbox/Sony because they wanted MORE money. You are proving my point. AMD sold out cheap, NV wouldn't. Yeah, you're right. Thanks. They tried to MAKE MONEY, not RIP OFF MSFT. I wouldn't work for free either, basically. :) 15%, in semi? ROFL. Only if I can't make more on something else ANYWHERE. Note the xbox360 cost MSFT 3.5B or so, and Sony lost ~4B...ROFL. Jury still out on ps4/xbox1 etc. AMD thought they might beat NV tech by being in a console and hoping games would aim at them, fixing their perf problems vs. NV gpus. It's not working. See 9% going to NV over TECH. RT+DLSS sells...OUT that is..In minutes.
Wake me when you actually have a data point and learn to debate. See Paul Graham's chart.
Teckk - Wednesday, September 30, 2020 - link
Yes, all fabs at full steam. Funny you forgot about 10nm and 7nm.
"but it is only until they right the fab ship and they have many ways to do that." Like they've been doing for the last 4 Skylakes? Or was it 5? You're good at counting, you will know that for sure.
Come for a debate when you actually know something about the process nodes and where they are. Play with historical numbers till then. That's what they are. Historical.
Tilmitt - Sunday, September 27, 2020 - link
Is Anandtech aware that Nvidia Corporation has released a new series of 3D accelerator boards?
Qasar - Monday, September 28, 2020 - link
they are, but there are these fires in California.......
Sychonut - Monday, September 28, 2020 - link
Excessive politics + power hungry Murthy = delays
deil - Monday, September 28, 2020 - link
I am not sure how reliable that source is, but I heard Intel could not get double-digit yields on 10nm at first; then, after making the design less "innovative", yields were okay, but performance was within margin of error of 14nm+++++++++++++
Spunjji - Monday, September 28, 2020 - link
Entirely reliable.
In practice, Ice Lake is roughly comparable to Comet Lake in everything but GPU performance; overall what it gained in IPC it lost in clock speed. This was *after* Intel had already relaxed their 10nm transistor density way below their initial claims of 67-100 million transistors per square millimetre.
They finally seem to have fixed that with Tiger Lake, but given the paper-launch nature of that release and their reluctance to discuss the 8-core variants, I'd be happy to surmise that either yields are still not great or they only have some fraction of their 10nm fab resources capable of manufacturing on the new "SuperFin" node variant.
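To put those density figures in perspective, a quick back-of-envelope calculation of the die area a fixed transistor budget would need. The 4-billion-transistor budget is hypothetical, and the MTr/mm^2 values (~100.8 for Intel's original 10nm claim, ~67 for the relaxed libraries, ~37.5 for 14nm) are widely reported estimates, not official spec:

```python
# Hypothetical 4-billion-transistor die; densities in millions of
# transistors per mm^2 (MTr/mm^2), per the estimates discussed above.
transistors = 4_000_000_000

densities = {
    "10nm as originally claimed (~100.8 MTr/mm^2)": 100.8,
    "10nm relaxed libraries (~67 MTr/mm^2)": 67.0,
    "14nm (~37.5 MTr/mm^2)": 37.5,
}

for label, mtr_per_mm2 in densities.items():
    die_area_mm2 = transistors / (mtr_per_mm2 * 1_000_000)
    print(f"{label}: {die_area_mm2:.0f} mm^2")
```

The same logic design gets roughly 50% larger when built with the relaxed libraries, which compounds the yield problem on a node that was already struggling.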
RedOnlyFan - Saturday, October 3, 2020 - link
@Spunjji Your delusional comments deserve praise.
Linustechtips12 - Monday, September 28, 2020 - link
Look, honestly: since Intel didn't have competition, why wouldn't you stay roughly on the same node? It saves money and time. You can argue that they should've kept innovating, but are people forgetting they own stock in AMD too?
Agent Smith - Monday, September 28, 2020 - link
They're going to drown in those lakes
jjjag - Tuesday, September 29, 2020 - link
Wow, the amount of missing the point in the comments has reached epic proportions. AMD will never win. They are a failed company and a failure as a business. They have been around for 50 years and have never amounted to anything more than a side note. They continue to hold on to a failed business model (stealing x86 tech from Intel) with a death grip. As we speak, right now, Apple, Google, Nvidia, and many other companies are developing better, faster, more power-efficient mobile CPUs with ARM cores and standard IPs on TSMC 3nm and 2nm processes. The ARM disruption is already over. ARM is already wearing the yellow jersey on the road, and we just have a few days left in the race. AMD will be the first to fall, and Intel will be next. Both companies need to make serious strategic changes if they want to exist in 10 years.
Everett F Sargent - Tuesday, September 29, 2020 - link
Wow, the amount of missing the point in the comments has reached epic proportions. Linux will never win. I don't think anyone is missing the point. Intel is entrenched throughout, and has been for decades now.
The only point is to mock Intel's PR naming-scheme revisions, given their so-called stumbling on the road to 10nm.
As to ARM, we will have to wait and see. RISC, anyone? This goes back to at least the 1980s:
https://en.wikipedia.org/wiki/Reduced_instruction_...
https://en.wikipedia.org/wiki/ARM_architecture
Catalina588 - Wednesday, September 30, 2020 - link
Thank you, thank you, thank you @Ian for the decoding.