It seems odd that they'll be releasing phones with the 845 at the same time as Windows machines with the 835 (spring 2018). Maybe Asus and HP just got their hardware developed early in the year and MS dragged their feet on WoA. Hopefully Lenovo is launching with the 845.
Have HP and Asus officially announced their specs? The only announcements I've seen are from tech review websites. I can't imagine these companies would release their computers with the 835 when the 845 is only a few months away.
As others said, this will be in the Galaxy S9 in March-April. Also, to comments above: Samsung has a sort of unofficial exclusive deal with Qualcomm via their purchasing agreement, so don't expect the 845 to really pop up in other devices until 2H 2018
Lead times on a new platform are too long. They have probably been working on the laptops at least since the 835 was first released. For anyone thinking of buying one of these devices, it might be a good reason to wait a while though.
Oh no, Windows RT all over again. Windows 10 laptops should be based on Intel x86 (or AMD Ryzen), not ARM. Realistically speaking, would you play a game on a desktop that has a phone CPU/GPU in it?
Not the same. Windows RT was an ARM OS. The new ARM-based Windows laptops run x86 applications on top of an emulator. You can install x86 apps, though only 32-bit apps at first. HP promises LTE and 20-plus hours of battery life on the HP Envy x2.
With that, the vastly absent Helio X35 is going to die.
Most probably Qualcomm will become the cheap option (look what Xiaomi managed to do with them), MediaTek will be low end, and hopefully Win 10 will pick up and everything will move on.
Too bad that MediaTek lost this game, and Qualcomm is not innovating anymore but rather using off-the-shelf ARM cores.
Not to forget the also-dead Imagination.
I hope Qualcomm will have a better fate. Still, it's good they are moving forward. Well done!
The A11 is kind of a generation after. The Snapdragon 835 had the best GPU when it came to market. Also, I don't think the A11 will turn out 10% faster than the SD845 GPU-wise; they should be about equal.
That's a pretty bad argument. There were 6 months between the first SD835 device and the first A11 device. There were, however, 7 months between the first A10 device and the first SD835 device. So that argument goes both ways.
And the Adreno 540 hardly beats out even the A10. On the other hand, every new Apple SoC beats the previous Snapdragon substantially in GPU performance. The SD845 next year won't even beat the A11 in GPU performance, and will still perform about 10% worse.
Yeah... Look at the die sizes of the A11 vs the SD835. If Qualcomm had the option to use such a big die, I am sure it could improve on a lot of things. Apple fanboys don't realize this.
>"If Qualcomm had the option to use such a big die size, I am sure it can improve upon a lot of things. "
But they don't. That's the whole point. And there's no legitimate reason why the various chip makers for Android are so far behind Apple. The argument about making an SoC for many OEMs is something maybe Qualcomm can use (although it's also laziness, as Qualcomm don't actually design their own architecture, they license it from ARM). But what about Samsung and their Exynos chips? Huawei and their Kirin? These companies are making chips that are used specifically in their own flagships, just like Apple. How come they don't have that advantage?
>Apple fanboys don't realize this.
Except I'm not. I hate everything about Apple and can't bear using iOS. But the facts of the matter are clear, and Apple's SoCs perform way, way better than anything on the Android platform. The difference is big enough to be embarrassing. Even Intel never had such an advantage over AMD at any time.
I can only say 'amen' to this. I feel the Android companies have become lazy - they don't even try to compete anymore, at least on CPU performance. And as I'll buy an Apple the day the sun goes supernova, it means I can only get the second best on the market, with a huge gap to number one.
Your comment about AMD vs Intel was absurd. Just before Ryzen was launched, Intel was very very far ahead.
Regarding Apple vs. Qualcomm, first of all we need to keep in mind that TSMC is the leading foundry today and their advantage over Samsung will only widen in the next 18 months. So based on process manufacturing technology alone, Apple has more than a 6-month advantage over everyone else. Then we arrive at the Android ecosystem. Who in their right mind would develop any app in the next 12 months that would not run on an 835 SoC? Heck, even targeting the 835 as a minimum requirement would be crazy. The vast, vast majority of Android phones out there are not running Qualcomm's flagship SoC. Buying an 845 tomorrow will only mean a somewhat faster phone (if you can even notice it), but you won't be able to do anything I can't do with my Moto G5S+, which is really the kind of device that makes sense to buy if you like Android.
Since you didn't mention any qualifiers, I'll toss this out: an 845 should provide a usable AR/VR experience; that's not possible with the G5S+. Also there's 4K60 HDR, 480fps recording at 720p, the ability to run on-device neural networks at low power (practically speaking, at all), etc. You may not be interested in any of these things, but you only said the 845 "won't be able to do anything" your phone can't.
Absurd? You said yourself that before Ryzen Intel was very far ahead. That was what I was trying to say. AMD was basically dead in the desktop segment before Ryzen launch, with hardly any market share. And Intel's advantage was what, 60% better ST performance and not even half that in better MT performance? Apple's advantage is far bigger in ST alone, at 120%!
> And there's no legitimate reason for why various chip makers for Android are so far behind Apple
Well actually there is - it seems Apple has managed to build processors that are so powerful that they pull more current from the battery than even a 12-month-old iPhone can reliably manage.
Apple can charge very high prices for its products, so it can afford to pay for large die sizes in its processors. Qualcomm can only sell its Snapdragon 845 for roughly $60, so it has to keep costs down and use smaller die sizes to make a profit. Here is a good article that explains: https://www.androidauthority.com/why-are-apples-ch...
Given the very fast release pace of SoCs these days and the low performance of the first Kryo (Qualcomm's own custom core), they probably need a few years to rethink it and bring it back to market. In the meantime, they can just take ARM IP directly, since it is good enough each generation.
Maybe you are rushing a bit into being worried about power. Clocks go up by 14% over the SD835 while the process is supposed to provide 10%, not a large difference. The A73 was targeted at 2.8GHz, the A75 at 3GHz, and with the SD835 10nm was new; that tends to get messy.
Anyway, the SD845 is rather boring, and I'm not sure anyone can do anything about making phone SoCs less boring at this point - outside of a 100x faster GPU, but... Of course, in this segment some competition would be good to push prices down, and, in an ideal world, coupled with 98% lower licensing fees for substantially cheaper flagships.
"But we also have to remember that given that the new CPU cores are likely based on A75's we should be expecting IPC gains of up to 22-34% based on use-cases, bringing the overall expected performance improvement to 25-39%."
You seem to calculate 0.22 * 1.14 = 0.25 and 0.34 * 1.14 = 0.39, but it should be 1.22 * 1.14 = 1.39 and 1.34 * 1.14 = 1.53, i.e. a 40-50% performance gain on those use cases at the higher frequency.
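A quick sanity check of that compounding, as a hypothetical C sketch (the 2.45GHz and 2.8GHz clocks are the SD835/SD845 figures discussed here, the 22-34% range is the quoted A75 IPC uplift; the rest is just arithmetic):

#include <stdio.h>

int main(void)
{
    double freq_gain = 2.8 / 2.45;            /* SD845 vs SD835 clock, ~1.14x */
    double ipc_low = 1.22, ipc_high = 1.34;   /* quoted A75 IPC uplift range  */
    /* Gains compound multiplicatively, they are not added on top of 1.0 separately. */
    printf("combined uplift: %.0f%% to %.0f%%\n",
           (ipc_low * freq_gain - 1.0) * 100.0,
           (ipc_high * freq_gain - 1.0) * 100.0);   /* prints roughly 39% to 53% */
    return 0;
}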
Phone SoCs boring?!? As opposed to what, Intel and AMD? Lol. Phone SoCs are EASILY the most high-tech products available to consumers today. It just isn't even funny how much more advanced they are than PC tech.
Yes, they are boring, as CPU perf is already more than enough and GPU perf is way too low. Machine learning is slow to be implemented and phones are far from the ideal form factor for it. As for cameras, we need much better sensors for real progress. Qualcomm could keep selling the SD835 for the next 5 years and 95% of users wouldn't notice. This is a major upgrade and nobody will even notice; all the upgrades offer minimal value to the user. Hell, 99% of users would be just fine with an SD660 or less.
That still leaves 5% more power consumption on frequency alone, and don't forget the ~15% IPC increase also comes at 15% more power consumption. That's 20% in total, not a small number by any means.
The first graph used to show the performance improvement is very misleading. The biggest improvement is in the Octane benchmark, at 48%, yet the chart draws the bar at almost 5x the baseline. I feel the year-on-year improvements from ARM SoCs are quite small.
Actually this is a huge performance improvement, far larger than last year. Geekbench 4 gains 50+% between 835 and 845. Given the 835 scores over 2100, the 845 should get over 3200, or 75% of the A11. That's not bad at all!
Yes but can QCOM's 2018 chip beat an Apple chip from 2017? GPU: Yes CPU: No.
I'm an Android user (Pixel XL), but I really dislike seeing posts like this praise Qualcomm when they're WAY WAY behind Apple in terms of single threaded CPU perf.
There is only about 6 months difference between the availability of phones, so calling it 2017 vs 2018 is being misleading on purpose.
I'd expect multithreaded performance to be about equal, and single-threaded to be at 75%. That's not being way behind by any measure.
I have a Galaxy S6 which is fast enough for me, but this will be twice as fast. At this much higher performance level further performance increases will be hard to notice.
Stop making up lies. The A11's single-core performance is 120% higher and its multi-core performance 50% higher than the SD835's. Even its GPU is 40% more powerful than the Adreno 540.
The SD845 only improves performance by 25-30% over the SD835, which means that overall performance will still be noticeably behind the A11. The latter will still be about twice as powerful (95-100% faster) in single-threaded performance and 20-25% faster in multithreaded performance. At a lower frequency as well (2.4 GHz on the A11 vs. 2.8 GHz on the SD845).
So no, the SD845 doesn't have 75% of the A11's single-core performance. It has 50%. Multithreaded will still be behind, at about 85% of the A11, not equal.
Despite being much further behind in architecture, ARM's IPC improvements are actually slowing down faster than Apple's: the A75 improved IPC by 10-15%, whereas the Monsoon cores in the A11 improved IPC by 20%. At this rate there's no catching up to Apple, and the gap will never close. There's clearly no timeframe for catching up; Apple seems to improve (and stagnate) in line with the rest of the industry.
Stop spouting bullshit. Did you even read the article??? The speedup on GB4 is clearly shown as 34% (surprise, some benchmarks improve more than others). Then there is a 14% increase in frequency, which makes for a >50% performance gain. It's trivial to verify that current SD835 phones score 2100+ while A11-based ones top out at 4300, i.e. just about 2x faster.
So yes my 75% ST / 100% MT numbers are accurate. 50% gain in one generation is a huge leap.
You read it, jackass. Qualcomm clearly say themselves that the SD845 performs ~30% better than the SD835. Also, even supposing your 50% claim is true, which it clearly isn't, that still wouldn't put the SD845 within 75% of the A11, as you originally claimed. It would still only reach about 60% of the A11's performance.
The average perf gain isn't relevant here - some benchmarks will gain far less than 30%, some far more. GB4 is simply one that improves a lot more. The graph clearly shows a 34% gain on GB4 at equal frequency. So the 50% gain is simple arithmetic. I've posted a link showing that SD835 phones reach over 2100 on GB4, so gaining 50% means ~75% of A11's 4300 score.
The graph doesn't show a 34% gain at equal frequency, no. The graph only says 34% increased performance. It doesn't say anything about same frequency, or whether it's multithreaded performance, single-threaded performance or both combined. It just says 34% and nothing more.
Here's a statement by Qualcomm themselves on their own site:
"The new Qualcomm® Kryo™ 385 architecture, built on Arm® Cortex™ technology, will see up to 25 percent performance uplift across gaming, application launch times, and performance intensive applications compared to the previous generation" and
"Kryo 385 CPU - Four performance cores up to 2.8GHz (25 percent performance uplift compared to previous generation"
But I guess Qualcomm are delibaretely lying to make their own products sound worse than they are, huh?
The graph says "All comparisons at ISO process and frequency", so that means at the same frequency, i.e. it's the microarchitectural improvement alone. Maybe you need glasses?
Like you quoted, QC explicitly states the average gain across a wide range of use cases. Most users don't care about the results of specific benchmarks, so quoting an average is quite reasonable. However, that's not evidence that every benchmark will get exactly 25%, especially not given that we have more detailed results for a few specific benchmarks.
Do you have any evidence that a 50% gain on a benchmark simply isn't possible in one generation? Why are you so upset by a great performance gain? It is as if you simply don't want the gain...
Yes, people often point to Geekbench, but it's actually worthless when comparing different hardware on different software platforms. The only thing Geekbench is remotely accurate at is comparing hardware evolution on the same software platform, such as A9 to A10 or SD835 to SD845. You only have to dual-boot Windows and Linux on a desktop and run Geekbench 4 on both to see how massively different its results are from OS to OS. But you have a fair number of mindless people who think an iPhone is as powerful as a MacBook Pro because of Geekbench's useless cross-platform results.
People always compare these apples to oranges. Plus, people also forget how the SD82x platform tore the A9 to shreds.
This is again a semi-custom chip we are looking at, same as the 835, instead of a fully custom design like Kryo or Krait. Also, the A-series chips are very expensive (hence the higher price) and Apple has more R&D expenditure in that regard. Not that Qualcomm couldn't, but bleeding that much cash into the ecosystem would break their bank, given the NXP case, the Apple lawsuits across the world, the patent game and their market footing.
Also, the Qualcomm Snapdragon platform has an undisputed advantage when it comes to OSS development; no other SoC can compete with them. Exynos pre-S III used to, but Samsung chose a different path. Maybe Kirin, but it's not as transparent or as far ahead of the league with CAF as Qualcomm.
The current version of Geekbench uses big ass chunks of commonly used open source code compiled for each platform and running the exact same workload.
For instance, one of the tests is to use PDF rendering code from the Chrome web browser to render the same PDF on each platform.
It tests things like LLVM compiling source code, SQLite running database tests, the Lua scripting language used in so many games, Chrome rendering HTML and PDF, etc.
Maybe my reading skills are slipping but that doc is extremely high level with no links to their repo or even compiler settings (oh yeah, they use different compilers for different platforms, so, that's not great).
I would agree with this - Geekbench is not a reliable benchmark. I seriously don't believe we have a good set of benchmarks to actually explain the differences between CPUs, especially between x86 and ARM processors.
For one thing, there is a big difference between RISC (ARM) and CISC (x86) architectures. By design, CISC has complex instructions, meaning a single instruction can do more work when it executes. This has become even more complex with instruction extensions like AVX2 and now AVX-512.
RISC, on the other hand, is Reduced Instruction Set: it has simpler instructions, which can be processed more efficiently. This means that complex operations take more instructions to accomplish.
I used to be very heavily into CPU architecture as an OS developer, and at Georgia Tech I took a class on microcode programming - both CISC and RISC instructions eventually get decoded down into microcode. I have heard that Intel changed the design of its microcode to take away the advantage of RISC instructions in the instruction pipeline, by changing the way instructions work in the pipeline.
The most interesting thing I have heard in this area is a proposed technical rumor about 10nm concerning optimized MOVS instructions - these are extremely frequent in code emitted by compilers and also written by people who hand-code assembly (pretty rare now).
There is a book called "80x86 Architecture & Programming" by Rakesh K. Agarwal (1991) which pretty much provides pseudocode for each of the x86 instructions of the time. If the rumors are correct and Qualcomm made some optimizations for x86, I could see them creating new instructions to make it run faster - books like this one and others' knowledge could help.
That then poses a bigger question: does the Qualcomm processor become an ARM processor with Intel x86 emulation, or is it going to be another Intel-clone CPU like AMD's?
As for the "iPhone is as powerful as a MacBook Pro because of Geekbench's useless cross-platform results" point, this could be true for my hardware: an iPhone 6 and a 2009 13-inch MacBook Pro with a 2.2GHz Core 2 Duo. But there is no way even an iPhone X is anything close to a new MacBook Pro in real life.
I would suggest you read some more books and do some actual programming before making many obviously false claims. The Hennessy&Patterson book is biased towards MIPS but OK for beginners learning about RISC and CISC. https://www.amazon.co.uk/Computer-Architecture-Qua...
The fact is Arm and AArch64 typically require fewer instructions than x86. This is simply about good ISA design: adding only instructions which are useful to compilers. It's funny you mention REP MOVSB, that's a perfect example of a complex and slow instruction that is never used by compilers. This has been the case for the last 30 years - even in the latest x86 CPUs, REP MOVSB is something like 3x slower than a hand coded memcpy for typical sizes...
"It's funny you mention REP MOVSB, that's a perfect example of a complex and slow instruction that is never used by compilers"
It is obvious some people have no idea about compilers - at least for C++, a lot of the time moving variables around gets translated from C++ code into these instructions in the actual assembly code.
I compiled a simple MFC app and found tons of REP MOV instructions:
_this$ = ecx
push ebp
mov ebp, esp
sub esp, 204 ; 000000ccH
push ebx
push esi
push edi
push ecx
lea edi, DWORD PTR [ebp-204]
mov ecx, 51 ; 00000033H
mov eax, -858993460 ; ccccccccH
rep stosd
"This has been the case for the last 30 years - even in the latest x86 CPUs, REP MOVSB is something like 3x slower than a hand coded memcpy for typical sizes..."
Do you realize that memcpy is actually a C function that used REP MOVS in its implementation? Don't just take my word for it - look at the people who discuss the internals of the Linux kernel.
This book actually looks like a newer version of the book I had at Georgia Tech in the 1980s - yes, things have changed - but I have over 30 years in software development, including almost seven years of Intel assembly operating system development, including finding an erratum in a CPU related to 386 protected mode code.
"The fact is Arm and AArch64 typically require fewer instructions than x86"
You do realize that, by the nature of CISC, a single CISC instruction takes many RISC instructions to execute.
Sure, you can find examples where VC++ emits rep instructions when optimizing for size, but when you turn on optimization compilers won't, because it would slow down your code. Look in glibc: there are SSE, AVX, AVX2 and AVX-512 variants of memcpy and memset.
"You do realize by nature of CISC by it nature a single CISC takes many RISC instruction to execute."
This kind of statement shows you have actually no idea. In reality compilers don't use complex instructions - if you look at typical x64 code you'll see the majority of instructions are simple (even load+op instructions are hardly used). The really complex instructions like ENTER/LEAVE/REP MOV/SIN/COS etc are practically never used. If you did use the complex instructions then it would not be possible to design a CPU that ran them fast enough. So x86 is fast only because CPUs optimize for the simple instructions that compilers use.
Another interesting RISC/CISC fact that most people get wrong too: x86 has very bad codesize due to the complex instruction encoding. AArch64 code for example is smaller than x64 despite using fixed-width 32-bit instructions. Yes, x64 instructions need more than 32 bits on average!
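For anyone who wants to check the memcpy claim above themselves, here is a minimal, hypothetical benchmark sketch (x86-64 with GCC or Clang inline assembly; the buffer size, iteration count and the copy_rep_movsb helper are my own illustrative choices, not anything from glibc or from this thread):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Copy n bytes with an explicit "rep movsb" (direction flag is clear per the ABI). */
static void copy_rep_movsb(void *dst, const void *src, size_t n)
{
    __asm__ volatile("rep movsb"
                     : "+D"(dst), "+S"(src), "+c"(n)
                     :
                     : "memory");
}

int main(void)
{
    enum { SIZE = 4096, ITERS = 500000 };
    char *src = malloc(SIZE), *dst = malloc(SIZE);
    memset(src, 0xAB, SIZE);

    clock_t t0 = clock();
    for (int i = 0; i < ITERS; i++) {
        src[i % SIZE] = (char)i;          /* keep the copies from being optimized away */
        memcpy(dst, src, SIZE);
    }
    clock_t t1 = clock();
    for (int i = 0; i < ITERS; i++) {
        src[i % SIZE] = (char)i;
        copy_rep_movsb(dst, src, SIZE);
    }
    clock_t t2 = clock();

    printf("memcpy:    %.3fs\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("rep movsb: %.3fs\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    printf("(check byte: %d)\n", dst[SIZE - 1]);
    free(src);
    free(dst);
    return 0;
}

On recent Intel cores with enhanced REP MOVSB the gap can shrink for large copies; the point is simply that the library memcpy, not the legacy string instruction, is what optimizing compilers and glibc actually reach for.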
If you are referring to Qualcomm vs Apple, Apple does have the unique advantage that their CPU/GPU targets a single OS - we might see new chips gain a significant advantage with new versions of the OS.
Apple also has tighter control over the development of apps, whereas Microsoft and Android have less control over apps and also have to work across different CPU/GPU combinations.
Yes, and I believe the goal is to use LLVM for the Windows version too eventually. Compiler differences are small, but there are issues due to using different libraries. But that's nothing new, Windows has always been slower allocating memory, opening or accessing files etc.
Samsung uses 2 different SoCs in its phones and ensures performance (and battery life) is similar, so they run the SD835 version at a lower frequency. Max frequency is 2.35GHz in the S8, but I guess it must be a bit lower when running Geekbench.
Note Kirin 970 gets over 2000 in GB4 at 2.4GHz, so 2100 for a 2.45GHz SD835 is spot on.
If you ignore the memory score, the crypto/int/fp results for the S8 are exactly what you'd expect when running between 2.25 and 2.35GHz. The AES test shows the actual frequency the CPU runs at.
Yeah, the scale of the graphs is definitely misleading. If something isn't even 2x the performance baseline, then the bar shouldn't blow well past the 2x mark. That's not to take away from the actual performance gains, but it's definitely on the misleading side.
Can the DSP also be configured for AV1? The standard should (theoretically) be locked down this year. And Google refuses to pay the HEVC royalties for Android, as does (apparently) everyone but the richest handset sellers. Meaning AV1 will, in all likelihood, be the HDR choice for both video and stills next year.
I seriously doubt the 835 will be affordable - with gigabit LTE support included and an attempt to go up against Windows platforms with the same chip, it's going to be quite expensive. Qualcomm has a near-monopoly on LTE and is taking advantage of it.
BTW, my understanding of Qualcomm's approach to AI is to use Hexagon for energy-efficient inferencing. For more performance-intensive tasks, there's the GPU.
It's easy to support it, so that's another checkbox for the literature. I'm not sure if there is a crossover point in their lineup where the SoC's GPU becomes faster than the Hexagon for these loads. BTW, I can't imagine there would ever be a useful model where it makes more sense to run it on the CPU if you have any alternative. It would just be painfully slow.
The CPU has NEON, so if your network is tiny and the batch is small, it could be worth just running it on the CPU rather than stuffing the request in a queue for the DSP and suspending/resuming your process while waiting for the DSP to handle it. All of that is overhead you wouldn't have if you just ran it locally on the CPU.
It's basically the same argument for why the CPU even has a FPU, rather than always shipping everything over to the DSP or GPU.
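To make that concrete, here is a minimal, hypothetical sketch of the kind of tiny fully-connected layer you might keep on the CPU with NEON rather than queueing to the DSP (AArch64 intrinsics; the function name and layer shapes are illustrative only, not from any Qualcomm SDK):

#include <arm_neon.h>
#include <stddef.h>

/* out[j] = sum_i in[i] * w[j * in_len + i]; assumes in_len is a multiple of 4. */
void tiny_fc_layer_f32(const float *in, const float *w, float *out,
                       size_t in_len, size_t out_len)
{
    for (size_t j = 0; j < out_len; j++) {
        float32x4_t acc = vdupq_n_f32(0.0f);
        const float *row = w + j * in_len;
        for (size_t i = 0; i < in_len; i += 4) {
            /* acc += in[i..i+3] * row[i..i+3], four lanes at a time */
            acc = vmlaq_f32(acc, vld1q_f32(in + i), vld1q_f32(row + i));
        }
        out[j] = vaddvq_f32(acc);  /* horizontal add of the four lanes (AArch64 only) */
    }
}

For a layer this small the whole call finishes in microseconds, which is exactly the crossover argument above: the IPC round-trip to the DSP can cost more than just doing the math in place.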
As would I. Sadly, Qualcomm doesn't support their SoCs for 4 years, so you won't be seeing a Snapdragon in a Chromebook anytime soon. What is needed for Chrome OS is for Google to "make" (contract out to a third party) a nearly stock ARM A75/A55 SoC with a large-die Mali graphics solution pushing around 1.2 teraflops of FP16 performance. Give it a 4.5-watt TDP so the SoC can hold those higher clocks better, and most importantly give it 4+ years of driver support. Chromebooks don't need advanced camera ISPs, AI, or gigabit+ LTE modems.
Finally 2160p60 HEVC encoding!!! Let's see if someone will be brave enough to enable it on smartphones... Only remaining questions: Is L5 signal supported like in the BCM47755 ? And what about Sapcorda support?
It seems to be supported in GPU and CPU only: https://semiaccurate.com/2017/12/18/qualcomms-ai-u... "The Hexagon 685 HVX ISA now supports INT8 instructions, the Adreno 630 has FP32 and FP16 as you would expect from a GPU, and the Kryo 385 now does FP32 and INT8."
Finally there is an update to the "LITTLE" core in the "big.LITTLE" configuration. After an extremely long three years of service, the Cortex-A53 is finally being replaced with A55. After this, hopefully the budget Android devices will finally get some kind of an upgrade.
"Qualcomm claims it's able to reduce memory access bandwidth by up to 40-75%, a significant figure." Oh boy, how I have waited for my memory bandwidth to be reduced like that ;) (should be "...reduces power by limiting memory access...")
"...captures multiple pictures in fast succession and applies an algorithm to remove noise reduction in higher quality fashion..." Yeah, I couldn't stand that higher quality noise reduction either ;) (-> "algorithm to remove noise" or "algorithm for noise reduction")
"Most devices I've seen with Snapdragon 835's used about 1.1W per core at peak using the power virus-style workload we traditionally use so ... is quite surprising" If it's 1.1 W in the worst case I don't think an increase of peak performance is all that surprising. Even 1.5 W on all 4 cores should be sustainable in usual flagship phones, as long as the GPU does not add load. If I remember the problems started at over 2 W per core (SD810). And any software which uses less threads, works those threads lighter or just runs for a short time will definitely be fine. IMO QCs choice here is not surprising and beneficial overall.
A little math: 1.5W x 4 = 6W. Add roughly 1W for the screen and another for the GPU and display pipeline and you're looking at about 8W. A typical flagship battery has around 3700mAh at either 3.7V or 4.2V, for a range of 13.7-15.5Wh, or less than 2 hours of use. Of course, thermal throttling will kick in well before that.
You CAN actually reduce memory access bandwidth by putting a very simple compressor in the memory controller. It adds a few cycles (not many) and can drop the required bandwidth by up to 50%. Centriq does this, so QC clearly has the tech, though being from a different group it may not have moved sideways to the mobile group yet.
The next frontier is to compress cache lines in L3, and there's been a fair bit of academic work showing the value of doing this, along with devising appropriately low-latency compressors. I expect we will see this within five years. (If ARM or QC do it, we'll hear. If Apple do it, of course we'll hear nothing - hell, they might already be doing it. [I suspect Apple may have introduced memory controller compression with the A11, hence their spectacular memory bandwidth numbers?] It would make sense for IBM; and of course it would make sense for Intel, except they seem to have made a company decision five years ago that they were never going to introduce a single damn innovation again after Skylake...)
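As a rough illustration of how cheap such a check can be, here's a hypothetical sketch of a base+delta test on one 64-byte line, in the spirit of the published base-delta-immediate work (the function name and the one-byte delta width are my own choices, not anything confirmed about Centriq or the A11):

#include <stdbool.h>
#include <stdint.h>

/* Returns true if a 64-byte line (eight 8-byte words) could be stored as one
   8-byte base plus eight 1-byte signed deltas, i.e. ~16 bytes instead of 64. */
bool line_fits_base_delta(const uint64_t line[8])
{
    uint64_t base = line[0];
    for (int i = 1; i < 8; i++) {
        int64_t delta = (int64_t)(line[i] - base);  /* wraps to a signed delta */
        if (delta < -128 || delta > 127)
            return false;
    }
    return true;
}

In hardware the eight comparisons would run in parallel, which is how such schemes can stay within the couple of extra cycles mentioned above.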
Anandtech should either fix that misleading "New performance levels" graph or delete it. This is a tech site, right? Shame on Qualcomm for even creating it.
The A55 is very attractive, but less L2 cache and a lower frequency will kill the performance improvement on the efficient cores. And the inefficient cores just have a higher maximum frequency and lower L5, so they're even less efficient. Kind of a disappointment. The A55's arrival was supposed to be great.
The low latency private L2 caches actually improve performance. Why else would you think they were added, to slow things down? Also you forget that the little cores previously had just 1MB L2, now it's 0.5MB in private L2 plus 2MB shared L3. When running only little cores (common scenario) that's 2.5 times more cache, or 6.5 times if you include the 3MB system cache.
Only 6.5 times more cache, that's very disappointing indeed!
Private (erotic) L2... :D But only half private... Yes, yes, a bit slower L3, and... even more of the slower "L4 cache" (X+X GB of RAM)... "L5 cache" (XXX GB of NAND flash cells). Wooo, 6555555555555 times more cache. I'd rather have more L2, which is faster than any other cache, than a large number of slower levels.
It seems odd to see Qualcomm move away from a custom core. Besides Apple, they were the only company that has historically rolled their own, going all the way back to the Scorpion cores in 2008.
Rumor has it that they will be back for the 855. Another interesting tidbit is that there are optimizations in the 845 for x86 emulation.
Qualcomm did an admirable job of integration as usual. Despite the CPU speed advantage of Apple's AX SoCs, Apple used much more die space to achieve it relative to Qualcomm and has so far never been able to fab a modem on die. Apple's vertical integration and designing-out of suppliers will give them a speed win in the short term, but in the longer term they will be slower to adopt improvements from bigger shifts in industry requirements, or better parts forged from satisfying the needs of multiple customers.
Given the difference in profit margins between the server and smartphone markets, if they can only afford one high profile design team, I wouldn't expect them to move that team back to smartphone chips at all. They seem to have a credible enough version one of an ARM server chip on their hands.
After we've seen so many nations find against Qualcomm on antitrust grounds for abusing their CDMA patents, and with CDMA being in its sunset years anyway, they are now going to need to plan for a future where their smartphone chip profit margin is greatly reduced.
Recall that CDMA corresponds both to the legacy 2G voice standard and, more generally, to the code division algorithm used in 3G and 4G, and Qualcomm owns the main patents that enable these. (OFDMA is used in 4G, but Qualcomm owns the patents there too, from its purchase of Flarion.) The voice standard is going away, but Qualcomm's patents in the higher speed data standards will be relevant and enforceable for a long time.
There's plenty of distortion of Qualcomm's licensing model by big, angry plaintiffs who paint it as an undeserved cut of others' innovation, but given Qualcomm's voluntary royalty cap and its low absolute size, it is better interpreted as a discount for makers of cheaper phones that make less intensive use of the IP. Regulators even in the EU agree with this, and it's likely the model stays:
Unlike the situation with CDMA, where Qualcomm owned the patents outright with no FRAND restrictions on licensing, LTE patents are all FRAND encumbered. Meaning Qualcomm has no choice but to license them without the sorts of shenanigans that have already landed them in antitrust hot water from so many nations, with more antitrust suits still in process. The US, for instance.
Qualcomm has to plan for a future where they can no longer milk the golden goose that was their CDMA patents. Profit margins in the server market are much higher than those for a smartphone SOC.
Early testing shows that their version one of a server chip is credible enough, but Intel is currently feeling threatened enough that Qualcomm really should keep their design team cranking on server chips.
At the end of a product cycle, OEMs care a whole lot more about margins than about licensing key technologies, enough to lie to regulators and break voluntarily signed, enforceable contracts that had been in effect for years and have conferred untold billions in benefits. The current atmosphere has much more to do with lobbying by large OEMs than with legal standing or fairness, and indeed there are strong dissenting voices in the FTC and the Taiwanese chamber of commerce over this specific matter.
That EU statement lends regulatory credence to the general model Qualcomm and others employ, and it's not just up to the licensees to determine what's "fair" or "non-discriminatory." In an industry where misreporting is so prevalent, I don't see a problem with erring on the side of getting paid when enforcing contracts, whether by refusing to supply your own implementation of the IP to unwilling licensees or by finding a correct range for your IP through market negotiations that may involve incentives. (You can't seem to win here, as more toothless non-practicing patent holders, who can only use the nuclear option of barring importation, get called trolls.) There will probably be more fines next year, but the current view on this issue neglects extensive precedent and legal standing; I think the model survives, along with their margins.
The issue for Qualcomm is that their patent free ride is ending, and with that their outsized profit margins.
Once CDMA is sunset by the carriers, we are no longer going to see the situation where, for instance, Samsung is forced to use Qualcomm's SOC in their US phones instead of simply using their own in house chips.
Qualcomm needs to plan for the future where they will have to get by in a world where the courts will decide what constitutes a reasonable royalty on their FRAND encumbered patents, and we've already seen the courts smack down several of the tech giants in the last ten years for not being the least bit fair or reasonable in their demands.
>According to the ruling, Microsoft will now have to pay Motorola Mobility $0.00555 for each unit it sells incorporating the H.264 standard, and $0.03471 per unit comprising IEEE 802.11 technology. Moto had originally argued for a 2.25% cut of the net selling price of the end product incorporating its SEP-protected technology. A chart from Microsoft, reproduced at AllThingsD, suggests that Moto’s original demands would have amounted to an annual payout from Microsoft of $4 billion. After Judge Robart’s ruling, Moto will be receiving a comparatively paltry $1,797,554 per year.
Again, it's CDMA the voice standard that is getting sunset, not 3G/4G, which use parts of the same algorithm but involve very different patents. Qualcomm is very much responsible for developing the 3G/4G infrastructure we all use and collects royalties from OEMs whose phones participate in the networks they enable.
The main difference from Motorola's claim was that they only held a couple of interlacing patents in H.264 that were vestigial to the standard, whereas Qualcomm owns the backbone. Comparisons were made to alternatives and other pools of patents using the standard Georgia-Pacific criteria for valuing patents, and they were deemed to be worth far less than Motorola was attempting to charge. I believe a similar analysis of Qualcomm's patents could well place their value above what they're charging.
Again, unlike the case with Qualcomm's CDMA patents, all of the LTE patents are FRAND encumbered. This was a requirement for any patent to be included in the LTE standard when it was created.
If you refused to agree to license your patents to all takers on a Fair Reasonable, And Non-Discriminatory (FRAND) basis, they used somebody else's IP to create the standard instead of yours.
Once CDMA is sunset and everyone moves to LTE, Qualcomm will no longer be in the position where they have patents that are absolutely necessary if you want a device that works on a network using Qualcomm's CDMA standard (Sprint, Verizon and US Cellular in the United States), with no FRAND restrictions on those patents.
Between recent antitrust findings and court rulings in the past decade on FRAND encumbered patents, along with CDMA use being sunset by the carriers in the not so distant future, the golden goose is slowly waddling away.
Which is why I say that Qualcomm needs a continued investment into a potential higher margin server chip future, because Intel is currently feeling threatened by competition for the first time in forever, and you can be sure they are no longer going to be so complacent.
You still misunderstand: there are NO alternatives used by major carriers. Qualcomm owns the backbone of the LTE patents, and they will continue to have a substantial presence in 5G standards IP as well. In exchange for FRAND licensing, the industry made many methods Qualcomm invented into the standards for the cellular interface with the EM spectrum in 3G/4G LTE/5G. That FRAND licensing should take into account the economic benefit of standards was recently released as a guideline in the EU, and it's a major point of contention for Apple. Qualcomm keeps its licensing revenue model under this.
Qualcomm is moving from a present where they can require any payment they like for the use of their CDMA cell phone patents with no FRAND restrictions at all (patents that are absolutely necessary for devices to work with a subset of carriers), to a future where all their patents on cell phone connectivity standards will be FRAND encumbered.
Court rulings over the past decade make it very clear that FRAND encumbered patents can't be used to price gouge, despite the best efforts of some very well heeled tech giants to do exactly that.
Qualcomm will still make a nice, reasonable profit on FRAND encumbered patents that will most likely be set either by the courts or through binding arbitration, but they will no longer be able to make an absolute killing the way their CDMA patent portfolio has allowed.
They have been in a highly profitable position for decades after creating a standard used by a subset of carriers in the days before modern standards bodies would only use FRAND encumbered patents, but as I say, that golden goose is slowly waddling away.
I guarantee you that the legacy CDMA voice standard that's being sunset is not what Qualcomm is making most of its licensing revenue on, nor is it what Qualcomm exerts its influence through. It is its 3G and later FRAND patents that are most valuable today and under the most contention. It receives the bulk of its licensing fees for these, but it is through the precedent of hundreds of voluntary licensing agreements that the notion that the contracts are fair is established, not by one or two licensees who are really after deals going into 5G.
I guarantee you that the second US carriers sunset CDMA, Samsung will stop using Qualcomm's SOC and modem in the US versions of their phones and use the SOC and modem they produce in-house instead, just like Samsung does in every other nation.
Do you imagine that Samsung uses much more expensive parts in the US for some other reason than Qualcomm's ability to hold those CDMA patents over their head?
We're discussing a different issue now, and I agree Samsung has less incentive to use Qualcomm parts once that happens. SoC manufacturing is generally much lower margin for Qualcomm than its licensing business, so it won't lose much margin on Exynos Galaxy phones, on which Samsung still owes Qualcomm a royalty. Its SoCs will still have a major edge in modem speed and integration over the Exynos parts, which use an on-package modem that incurs more expense than fabbing it on die.
That royalty will be a tiny fraction of what Qualcomm is currently able to demand, as they will no longer be legally allowed to charge a percentage of the retail cost of the entire device when they only have FRAND encumbered patents to leverage.
Again, this is exactly what Google tried to do with Microsoft's Xbox after Google bought Motorola, and the courts smacked Google down.
>According to the ruling, Microsoft will now have to pay Motorola Mobility $0.00555 for each unit it sells incorporating the H.264 standard, and $0.03471 per unit comprising IEEE 802.11 technology. ***Moto had originally argued for a 2.25% cut of the net selling price of the end product*** incorporating its SEP-protected technology. A chart from Microsoft, reproduced at AllThingsD, suggests that Moto’s original demands would have amounted to an annual payout from Microsoft of $4 billion. After Judge Robart’s ruling, Moto will be receiving a comparatively paltry $1,797,554 per year.
Totally untrue. The royalty will continue to be assessed on a discount schedule off of a fixed fee, and this is the same for any phone using Qualcomm's 3G/4G LTE patents. Furthermore, Samsung indeed used its own chips for the S6 but returned to Qualcomm for the S7 and S8; it is interested in dual sourcing for large-volume parts as well as in capital commitments to its capex-heavy foundry business.
If you read the details of the Google/Motorola vs Microsoft trial, the issue was that Motorola's patents were vestigial to the standard involved and it was correctly determined that the value added was minimal. Qualcomm's patents are very much seminal and central to 3G and 4G LTE and the value added is substantial as seen from the difference in price in consumer products like iPod vs iPhone or iPad vs iPad LTE.
Qualcomm has been in the enviable position of having total control of patents that are absolutely necessary to produce a cell phone that will function with the subset of carriers who used CDMA, with no legal restrictions on what they could demand for the use of those patents.
Qualcomm have been demanding (and getting) a percentage of the entire device, because there were no FRAND restrictions to stop them.
Those days are ending the moment that CDMA gets sunset by the carriers and Qualcomm is only left with FRAND encumbered patents covering cellular connectivity that they leverage.
No more demanding a percentage of the cost of the entire device.
Additionally, Samsung and Apple both show every sign that they will kick Qualcomm to the curb the moment they no longer have to kiss the ring in exchange for access to those CDMA patents.
Qualcomm needs to be thinking about where it's future high revenue streams are going to come from, because that golden goose is slowly waddling away.
What's not FRAND about that? The standard fee they assess is capped at around $10 for 3G/4G LTE, and if your bill of materials is under $300, they'll give you a discount. (5G licensing is similar; you can check their website for the details.) It will clearly be an OEM's goal to maximize their own margins and characterize the fee schedule as an undeserved cut of technologies that have nothing to do with Qualcomm, when in reality it's more of a discount.
Dunno about you but I live where museums, theaters, and amusement parks give children, seniors and students a discount and it's done in the name of fairness. Income tax schedules are even progressive, not just proportional also in the name of fairness. Qualcomm gives manufacturers of cheaper phones a discount on the technology needed to participate in the networks they enable, and this is fair since it enables new entrants and competition in the cell phone market.
Are you under the impression that it's your job to prop up Qualcomm's stock price? It sure seems that way.
Qualcomm faces major changes in their market position that are going to severely cut into their profit margins in the not so distant future when CDMA is sunset by the carriers and they are left with only FRAND encumbered patents on the basic cellular connectivity every device requires.
That doesn't mean they won't still get a fair and reasonable royalty on their patents that are part of the LTE standard, but the price gouging and its associated exceptionally high profit margins will end.
Just as Google saw its demand that Microsoft pay four billion dollars for using FRAND encumbered patents in the Xbox reduced to a payout of less than two million dollars by the courts, Qualcomm will no longer be legally allowed to price gouge when it only has FRAND encumbered patents to rely on.
Server chips are a high margin business and Qualcomm's server chip looks pretty good for a first revision, so it behooves Qualcomm to keep up its investment in that area if they are thinking about the future and about Intel waking from its slumber.
Well, it's also not like Steve Jobs put on a turtleneck and said: "...one more thing: we're suing Qualcomm," and I just think far too many people buy Apple's distortive comments at face value. They made next to no investments into the standards and rode licensing past the prior top phone makers in just a few years. I also think it's up to the market to determine what a fair rate is, rather than a single self-interested licensee who has a habit of stealing IP and not honoring contracts. The patents involved in Qualcomm's dispute are worth far more than those in the Motorola dispute if valued with methodologies similar to those employed there, so I think the outcome for Qualcomm in a similar trial would be quite different. Qualcomm is rather aggressive in monetizing its patents and is definitely not a good guy, but I think in the recent disputes Apple is behaving far worse.
All the nations where Qualcomm has already lost antitrust cases disagree.
Currently in the US, the Federal Trade Commission is also going after them.
>The FTC alleges that Qualcomm has used its dominant position as a supplier of certain baseband processors to impose onerous and anticompetitive supply and licensing terms on cell phone manufacturers and to weaken competitors.
>Qualcomm also holds patents that it has declared essential to industry standards that enable cellular connectivity. These standards were adopted by standard-setting organizations for the telecommunications industry, which include Qualcomm and many of its competitors. In exchange for having their patented technologies included in the standards, participants typically commit to license their patents on what are known as fair, reasonable, and non-discriminatory, or “FRAND,” terms.
>When a patent holder that has made a FRAND commitment negotiates a license, ordinarily it is constrained by the fact that if the parties are unable to reach agreement, the patent holder may have to establish reasonable royalties in court.
>According to the complaint, by threatening to disrupt cell phone manufacturers’ supply of baseband processors, Qualcomm obtains elevated royalties and other license terms for its standard-essential patents that manufacturers would otherwise reject. These royalties amount to a tax on the manufacturers’ use of baseband processors manufactured by Qualcomm’s competitors, a tax that excludes these competitors and harms competition. Increased costs imposed by this tax are passed on to consumers, the complaint alleges.
When CDMA tech sunsets and Qualcomm can no longer use its CDMA patent portfolio as a price gouging weapon, their profit margins are going to plummet. There is nothing Qualcomm can do about it.
You sound convinced it's open and shut, but it's not as monolithic an opinion as you seem to believe it is. Here's a rare written dissent by the acting chairwoman of the FTC in the specific matter of Qualcomm:
Most of these government acts were the result of lobbying specifically by Apple, and the FTC lawsuit was filed on the eve of the Obama administration's departure, followed by a hasty suit by Apple. This was not a coincidence as Trump will be much more friendly to American IP.
The EU specifically lays out guidance for longer term rules that add transparency but demand calculation of royalties based on economic value added:
Although Qualcomm will make some concessions, its basic model will not be affected. Again, it's not about CDMA but about 3G/4G LTE and negotiations going into 5G.
Again, all the nations where Qualcomm has already lost disagree with you.
Qualcomm is no longer going to be able to price gouge for a percentage of the retail price of a device when it only has FRAND encumbered patents to fall back on.
A FRAND obligation isn't one that just takes into consideration the distorted views of large established OEMs, but it should be fair to all parties in an ecosystem. A low fixed fee with a discount for less intensive IP usage to enable competition is very fair despite some OEMs crying foul on licensing that cedes any ground to competitors. You'll note the EU agrees on some of the most important issues with Qualcomm, Ericsson, Nokia and other patent holders:
"The Commission’s latest draft no longer has the phrase “licensing for all”, the sources said, a victory for Qualcomm as it removes the obligation on patent holders to provide patent license to all companies asking for them.
A key sentence in an earlier proposal has also been deleted, people said. The sentence said that right holders could not unilaterally set prices according to the way in which a patent is used."
Again, the courts have already found that FRAND encumbered patents cannot be used to discriminate against your competitors and price gouge for a percentage of the retail price of the entire device.
Qualcomm has already faced a string of losses in antitrust suits and really doesn't have a leg to stand on going forward the moment the carriers sunset CDMA.
I expect their revenues to drop precipitously from lowered patent royalties on FRAND encumbered patents vs. their current price gouging on CDMA patents, and from the companies like Samsung and Apple who will have no reason to give them any business going forward after all the bad blood Qualcomm has caused.
Courts have found nothing of the sort you claim, and if anything, recent EU statements explicitly exclude prior language lobbied for by Apple that would have prevented patent holders from unilaterally setting prices in the EU. The burden of proof for the use of rebates is not as cut and dried as prior regulatory decisions indicate:
Fines based on reasoning like this are open to appeal and closer examination. It's also much more about pricing than the moralistic jargon from Apple would have you believe, so peace to you, and I'll be happy to let whatever settlement or court decision they reach have the de facto say in this matter.
I have to say I know far less of the details of this than either of you, but I did see that Apple basically said, "Yes, we'll pay what you ask," then went out and made massive profits and remade their entire company based on devices using Qualcomm technology, and only after that went back and cried that they were charged too much. They had every opportunity to negotiate in good faith when they made their contracts, or raise the issue with the authorities, but they chose not to. Now that they're a giant with unlimited resources for litigation, they go back and say they don't want to pay. I like Apple's technology and products, but I can't respect this practice at all.
I believe that FRAND terms should limit you to charging a license fee commensurate with the function you deliver to the device - in this case communications - and that that fee should be fixed for all licensees.
The problem with Qualcomm's licensing is that it attempts to charge a percentage of the entire retail price of the device, and extorts handset makers into paying Qualcomm a licensing fee for all the innovation delivered not only by the handset maker, but for any material, quality or technological improvement delivered by any other licensor or contributor as well.
There should be a fixed license cost and it should be embedded with the chip, whether it's produced by Qualcomm or some 3rd party - and there it should (by the tenets of patent exhaustion) end. No more "percentage of the retail cost of the device" nonsense which attempts to capitalize on everyone else's work.
To praise Qualcomm for charging extortion-level licensing and "giving a discount" is like praising a mobster for charging you $1000/day to keep your shop from burning down, and having him give you a discount because you have to live on what he doesn't take. It's simply wrong-headed.
By limiting license fees to a cost embedded in the chip, licensing is kept to a reasonable level, since if the fee is excessive Qualcomm will lose the entire feature-phone-price-level device market.
A fixed fee is indeed still functionally a percentage of the wholesale cost of a device, just a higher percentage as the cost goes down. It's amazing that people buy Apple's distortion of this licensing model hook, line and sinker when Qualcomm, under its FRAND obligations, needs to be fair to all licensees and not just to the self-interested overtures of the biggest consumer electronics company in the world. To enable fairer competition, it gives makers of cheaper phones a discount off of a fixed rate.
The license fee enables a network, of which the chip it sells implements only a part. (The specifics of the implementation are a separate fee in the cost of the chip, not the SEP license.) There's patents related to the cell towers as well and substantial IP in the rest of the phone's interfaces with the modem as well. As it owns the patents, Qualcomm should be free to charge what it likes so long as it's FRAND. Indeed, many models you're familiar with offer discounts for the very same service, like theaters, amusement parks for children and seniors as well as software licenses for students.
Always great to see new tech and performance uplifts, but I do have to question the Gold/Silver naming that they and a lot of others are now using in their marketing materials. I think Intel might have used it first with their Xeon lineup, but it seems now everyone thinks "oh, that is so cool, let's do it as well."
sharath.naik - Wednesday, December 6, 2017 - link
This would be a better fit for Windows 10 based laptops.
Braincruser - Thursday, December 7, 2017 - link
Phones with this chipset will be available in 6-12 months.
Ratman6161 - Thursday, December 7, 2017 - link
I assume this will be the SoC on the flagship phones in the usual crop of spring releases. I would expect a Galaxy S9 with this SoC, for example.
Santoval - Friday, December 8, 2017 - link
Nope, expect it between March (at the earliest) and May (at the latest).
haukionkannel - Thursday, December 7, 2017 - link
Well, those are not for gaming. They are for reading email and using PowerPoint...
admnor - Monday, December 11, 2017 - link
Like a Nintendo Switch?
Nehemoth - Wednesday, December 6, 2017 - link
Will read in some minutes; let's hope there's information about a cache upgrade in these SoCs.
peevee - Thursday, December 7, 2017 - link
"Most probably Qualcomm Will become the cheap option"No way. All the phones became so expensive almost certainly because Qualcomm demands more money for the chips and wireless licenses.
thatgraph_shameless - Wednesday, December 6, 2017 - link
Front page article with that graph? Shameless...
darkich - Wednesday, December 6, 2017 - link
The three frequency/voltage planes sound like a half-assed effort on the CPU side... I suspect they just used a stock Cortex-A75 and interconnect. The GPU, OTOH - a 30% performance boost is impressive given the Adreno 540 was already on top of the game.
generalako - Wednesday, December 6, 2017 - link
Adreno 540 wasn't really top of the game, though? Apple GPU in the A11 was 40% faster.
darkich - Thursday, December 7, 2017 - link
A11 is kind of a generation after. The Snapdragon 835 had the best GPU when it came to market. Also I don't think the A11 will turn out 10% faster than SD845 GPU wise..they should be about equal.
generalako - Thursday, December 7, 2017 - link
That's a pretty bad argument. There was 6 months from the first SD835 device and the first A11 device. There was however 7 months between the first A10 device and the first SD835 device. So that argument goes both ways.
And the Adreno 540 hardly beats out even the A10. On the other hand, every new Apple SoC beats out the previous Snapdragon device in GPU performance substantially. The SD845 next year won't even beat out the A11 in GPU performance, and still perform 10% less.
Jts2 - Thursday, December 7, 2017 - link
Yeah... Look at the die sizes of A11 vs SD835. If Qualcomm had the option to use such a big die size, I am sure it can improve upon a lot of things. Apple fanboys don't realize this.
HammerStrike - Thursday, December 7, 2017 - link
Yep, because if there is one thing performance enthusiasts get worked up about it's too large of a die size. Obviously Apple is just phoning it in.
generalako - Thursday, December 7, 2017 - link
>"If Qualcomm had the option to use such a big die size, I am sure it can improve upon a lot of things. "
But they don't. That's the whole point. And there's no legitimate reason for why various chip makers for Android are so far behind Apple. The argument for making an SoC for many OEMs is something maybe Qualcomm can use (although it's also lazyness, as Qualcomm don't actually design their own architecture, they license it from ARM). But what about Samsung and their Exynos chips? Huawei and their Kirin? These companies are making chips that are being used specifically by their own flagship, just like Apple. How come they don't have that advantage?
>Apple fanboys don't realize this.
Except I'm not. I hate everything about Apple and can't bear using iOS. But the facts of the matters are clear, and Apple SoC performs way, way better than anything ont he Android platform. The difference is big enough to be embarrasing. Even Intel didn't ever have such an advantage over AMD at any time.
jospoortvliet - Friday, December 8, 2017 - link
I can only say 'amen' to this. I feel the Android companies have become lazy - they don't even try to compete anymore, at least on CPU performance. And as I'll buy an Apple the day the sun goes supernova, it means I can only get the second best on the market, with a huge gap to the number one.
mpbello - Saturday, December 9, 2017 - link
Your comment about AMD vs Intel was absurd. Just before Ryzen was launched, Intel was very very far ahead.
Regarding Apple vs. Qualcomm, first of all we need to keep in mind that TSMC is the leading foundry today and their advantage over Samsung will only widen in the next 18 months. So based on process manufacturing technology alone Apple has more than a 6 month advantage over everyone else. Then we arrive at the Android ecosystem. Who in their right mind would develop any app in the next 12 months that would not run on an 835 SoC? Heck, even targeting the 835 as a minimum requirement would be crazy. The vast vast vast majority of Android phones out there are not running Qualcomm's flagship SoC.
Buying an 845 tomorrow will only mean a somewhat faster phone (if you are even able to notice it), but you won't be able to do anything I can't do with my Moto G5S+, which is really the kind of device that makes sense to buy if you like Android.
tuxRoller - Sunday, December 10, 2017 - link
Since you didn't mention any qualifiers I'll toss this out: an 845 should provide a usable AR/VR experience; that's not possible with the G5S+. Also there's 4K60 HDR, 480fps recording @ 720p, the ability to run on-device NN at low power (practically speaking, at all), etc.
You may not be interested in any of these things, but you only said I "won't be able to do anything" your phone can't.
generalako - Monday, December 11, 2017 - link
Absurd? You said yourself that before Ryzen Intel was very far ahead. That was what I was trying to say. AMD was basically dead in the desktop segment before the Ryzen launch, with hardly any market share. And Intel's advantage was what, 60% better ST performance and not even half that in MT performance? Apple's advantage is far bigger in ST alone, at 120%!
Paradroid888 - Friday, January 5, 2018 - link
> And there's no legitimate reason for why various chip makers for Android are so far behind Apple
Well actually there is - it seems Apple has managed to build processors that are so powerful that they pull more current from the battery than even a 12-month-old iPhone can reliably manage.
amosbatto - Friday, January 19, 2018 - link
Apple can charge very high prices for its products, so it can afford to pay for large die sizes in its processors. Qualcomm can only sell its Snapdragon 845 for roughly $60, so it has to keep costs down and use smaller die sizes to make a profit. Here is a good article that explains: https://www.androidauthority.com/why-are-apples-ch...
yeeeeman - Thursday, December 7, 2017 - link
Given the very fast release pace of SoCs these days and the low performance of the first Kryo - Qualcomm's own custom-designed core - they probably need a few years to rethink it and bring it back to market. In the meantime, they can just take ARM IP directly since it is good enough each generation.
jjj - Wednesday, December 6, 2017 - link
Maybe you are rushing a bit into being worried about power. Clocks go up by 14% over SD835 while the process is supposed to provide 10%, not a large difference. A73 was targeted at 2.8GHz, A75 at 3GHz, and with SD835 10nm was new, and that tends to get messy.
Anyway, SD845 is rather boring, and I'm not sure anyone can do anything about making phone SoCs less boring at this point - outside of a 100x faster GPU, but... Of course in this segment some competition would be good to push prices down and, in an ideal world, coupled with 98% lower licensing fees for substantially cheaper flagships.
Andrei Frumusanu - Wednesday, December 6, 2017 - link
A75 gains performance with a linear increase in power. The process power improvement would need to be 30% to keep the same power envelope.
jjj - Wednesday, December 6, 2017 - link
I was assuming same or lower power at same clocks but that was wrong.
jjj - Wednesday, December 6, 2017 - link
And welcome back, nice surprise to see you write for AT again.
Wilco1 - Thursday, December 7, 2017 - link
Andrei - math failure here: "But we also have to remember that given that the new CPU cores are likely based on A75's we should be expecting IPC gains of up to 22-34% based on use-cases, bringing the overall expected performance improvement to 25-39%."
You seem to calculate 0.22 * 1.14 ≈ 0.25 and 0.34 * 1.14 ≈ 0.39, but it should be 1.22 * 1.14 ≈ 1.39 and 1.34 * 1.14 ≈ 1.53, i.e. a 39-53% performance gain on those use cases at the higher frequency.
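For reference, a quick Python sketch of the compounding being debated here; the 22-34% IPC range and the 14% clock increase are simply the figures quoted above, not new data:

# Compounding an IPC gain with a clock gain multiplies the speedup factors;
# it does not just scale the IPC percentage by the clock ratio.
ipc_gains = [0.22, 0.34]   # claimed A75 IPC uplift range (use-case dependent)
clock_gain = 0.14          # roughly 2.8 GHz vs ~2.45 GHz

for ipc in ipc_gains:
    naive = ipc * (1 + clock_gain)                 # ~25% .. ~39%
    compounded = (1 + ipc) * (1 + clock_gain) - 1  # ~39% .. ~53%
    print(f"IPC +{ipc:.0%}: naive {naive:.0%}, compounded {compounded:.0%}")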
darkich - Thursday, December 7, 2017 - link
Phone SoCs boring?!? As opposed to what, Intel and AMD? Lol.
Phone SoCs are EASILY the most high-tech products available to consumers today.
It just isn't even funny how much more advanced they are than PC tech
jjj - Thursday, December 7, 2017 - link
Yes, they are boring, as CPU perf is already more than enough and GPU perf is way too low. Machine learning is slow to be implemented and phones are far from the ideal form factor for it. As for cameras, we need much better sensors for real progress.
Qualcomm could keep selling the SD835 for the next 5 years and 95% of the users wouldn't notice.
This is a major upgrade and nobody will even notice; all the upgrades offer minimal value to the user. Hell, 99% of the users would be just fine with an SD660 or less.
levizx - Thursday, December 7, 2017 - link
That still leaves 5% more power consumption on frequency alone; don't forget the ~15% IPC increase also comes at 15% more power consumption. That's about 20% in total, not a small number by any means.
The first graph used to show the performance improvement is very misleading. The biggest improvement is in the Octane benchmark, with a 48% improvement, yet the chart shows almost a 5x improvement. I feel that year on year, the improvements from ARM SoCs are quite small.
Wilco1 - Wednesday, December 6, 2017 - link
Actually this is a huge performance improvement, far larger than last year. Geekbench 4 gains 50+% between 835 and 845. Given the 835 scores over 2100, the 845 should get over 3200, or 75% of the A11. That's not bad at all!
syxbit - Wednesday, December 6, 2017 - link
Yes but can QCOM's 2018 chip beat an Apple chip from 2017?
GPU: Yes
CPU: No.
I'm an Android user (Pixel XL), but I really dislike seeing posts like this praise Qualcomm when they're WAY WAY behind Apple in terms of single threaded CPU perf.
Wilco1 - Wednesday, December 6, 2017 - link
There is only about 6 months difference between the availability of phones, so calling it 2017 vs 2018 is being misleading on purpose.
I'd expect multithreaded performance to be about equal, and single-threaded to be at 75%. That's not being way behind by any measure.
I have a Galaxy S6 which is fast enough for me, but this will be twice as fast. At this much higher performance level further performance increases will be hard to notice.
generalako - Wednesday, December 6, 2017 - link
Stop making up lies. The A11 has 120% better single-core and 50% better multi-core performance than the SD835. Even their GPU is 40% more powerful than the Adreno 540.
The SD845 only improves upon performance by 25-30% over the SD835, which means that overall performance will still be noticeably behind the A11. The latter will still be twice as powerful (95-100%) in singlethreaded performance and 20-25% faster in multithreaded performance. And at a lower frequency as well (2.4 GHz on the A11 vs. 2.8 GHz on the SD845).
So no, SD845 doesn't have 75% of A11's single core. It has 50%. Multithreaded will still be behind and be about 85% of A11, not equal.
Despite being much further behind in architecture, ARM's IPC improvements are actually slowing down faster than Apple's. The A75 improved IPC by 10-15%, whereas the Monsoon cores in the A11 improved IPC by 20%. At this rate there's no catching up to Apple and no clear timeframe for closing the gap; they seem to improve and stagnate in line with the rest of the industry.
Wilco1 - Wednesday, December 6, 2017 - link
Stop spouting bullshit. Did you even read the article??? The speedup on GB4 is clearly shown as 34% (surprise, some benchmarks improve more than others). Then there is a 14% increase in frequency, which makes for a >50% performance gain. It's trivial to verify current SD835 phones score 2100+ while A11 based ones top out at 4300, ie. just about 2x faster.
So yes, my 75% ST / 100% MT numbers are accurate. 50% gain in one generation is a huge leap.
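For what it's worth, a rough Python sketch of the projection being argued about; the 2100 (SD835) and 4300 (A11) GB4 scores and the 34%/14% gains are the numbers claimed in this thread, so the result is only as good as those assumptions:

# Project an SD845 GB4 score from the claimed SD835 baseline and gains.
sd835_score = 2100                 # claimed typical SD835 GB4 score
a11_score = 4300                   # claimed A11 GB4 score
ipc_gain, clock_gain = 0.34, 0.14  # claimed GB4 IPC gain and frequency gain

sd845_score = sd835_score * (1 + ipc_gain) * (1 + clock_gain)
print(f"projected SD845: {sd845_score:.0f}")              # ~3208
print(f"relative to A11: {sd845_score / a11_score:.0%}")  # ~75%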
extide - Thursday, December 7, 2017 - link
It's a 34% increase in GB total. That includes both the architectural and frequency gains.
Wilco1 - Thursday, December 7, 2017 - link
Read it: "All comparisons at ISO process and frequency".
generalako - Thursday, December 7, 2017 - link
You read it, jackass. Qualcomm clearly say themselves that the SD845 in itself performs ~30% better than the SD835. Also, even supposing your 50% claim is true, which it clearly isn't, that still wouldn't put the SD845 within 75% of the A11, as you originally claimed. It would still only reach 60% of the A11's performance.
So no, your 75% ST claim is not true AT ALL.
Wilco1 - Thursday, December 7, 2017 - link
The average perf gain isn't relevant here - some benchmarks will gain far less than 30%, some far more. GB4 is simply one that improves a lot more. The graph clearly shows a 34% gain on GB4 at equal frequency. So the 50% gain is simple arithmetic. I've posted a link showing that SD835 phones reach over 2100 on GB4, so gaining 50% means ~75% of A11's 4300 score.
generalako - Friday, December 8, 2017 - link
The graph doesn't show a 34% gain at equal frequency, no. The graph only says 34% increased performance. It doesn't say anything about same frequency, or whether it's multithreaded performance, singlethreaded performance or them combined. It just says 34% and nothing more.
Here's a statement by Qualcomm themselves on their own site:
"The new Qualcomm® Kryo™ 385 architecture, built on Arm® Cortex™ technology, will see up to 25 percent performance uplift across gaming, application launch times, and performance intensive applications compared to the previous generation"
and
"Kryo 385 CPU - Four performance cores up to 2.8GHz (25 percent performance uplift compared to previous generation"
But I guess Qualcomm are deliberately lying to make their own products sound worse than they are, huh?
Wilco1 - Friday, December 8, 2017 - link
The graph says "All comparisons at ISO process and frequency", so that means at the same frequency, ie. it's the microarchitectural improvement alone. Maybe you need glasses?
Like you quoted, QC explicitly states the average gain across a wide range of use cases. Most users don't care about the results of specific benchmarks, so quoting an average is quite reasonable. However that's not evidence that every benchmark will get exactly 25%, especially not given we have more detailed results for a few specific benchmarks.
Do you have any evidence that a 50% gain on a benchmark simply isn't possible in one generation? Why are you so upset by a great performance gain? It is as if you simply don't want the gain...
Arbie - Thursday, December 7, 2017 - link
@generalako - Where do you get off being abusive? Go start flame wars somewhere else. This was a decent read until I hit your post. No call for it.
jwcalla - Wednesday, December 6, 2017 - link
You'd have to run Android on both the Apple chip and Qualcomm chip to get an accurate benchmark.
Compiler differences alone can explain benchmark differences.
SquarePeg - Wednesday, December 6, 2017 - link
Yes, people often point out Geekbench, but it's actually worthless when comparing different hardware on different software platforms. The only thing Geekbench is remotely accurate at is comparing hardware evolutions on the same software platform, such as A9 to A10 or SD 835 to SD 845. You only have to dual-boot Windows and Linux on a desktop and run Geekbench 4 on both to see how massively different its results are from OS to OS. But you have a fair number of mindless people who think an iPhone is as powerful as a MacBook Pro because of Geekbench's useless cross-platform results.
Quantumz0d - Wednesday, December 6, 2017 - link
Second this. People always compare these Apples to Oranges. Plus people also forget how the SD82x platform tore the A9 to shreds.
This is again a semi-custom chip we are looking at, just like the 835, instead of a full custom design like Kryo or Krait. Also, the A-series chips are very expensive, hence the higher price, and Apple has more R&D expenditure in that regard. Not that Qcomm cannot do the same, but bleeding that much cash into that ecosystem would break their bank, given the NXP case, the Apple lawsuits across the world, the patent game and their market footing.
Also, the Qcomm SD platform has an undisputed advantage when it comes to OSS development. No other SoC can compete with them. Exynos pre-SIII used to, but Samsung chose a different path. Maybe Kirin, but it's not transparent or ahead of the league with CAF like Qcomm.
BillBear - Thursday, December 7, 2017 - link
Ummm, no.
The current version of Geekbench uses big chunks of commonly used open source code compiled for each platform and running the exact same workload.
For instance, one of the tests is to use PDF rendering code from the Chrome web browser to render the same PDF on each platform.
It tests things like LLVM compiling source code, SQLite running database tests, the Lua scripting language used in so many games, Chrome rendering HTML and PDF, etc.
https://www.geekbench.com/doc/geekbench4-cpu-workl...
tuxRoller - Thursday, December 7, 2017 - link
Maybe my reading skills are slipping but that doc is extremely high level with no links to their repo or even compiler settings (oh yeah, they use different compilers for different platforms, so, that's not great).
HStewart - Thursday, December 7, 2017 - link
I would agree on this - Geekbench is not a reliable benchmark. I seriously don't believe we have a good set of benchmarks to actually explain the differences between CPUs, especially between x86 and ARM processors.
For one thing there is a big difference between RISC (ARM) and CISC (x86) architectures. By design, CISC has complex instructions, meaning a single instruction can do more work per execution. This has become even more complex with instruction extensions like AVX2 and now AVX-512.
RISC, on the other hand, is a Reduced Instruction Set - it has simpler instructions, which can be processed more efficiently. This means that for complex activity it takes more instructions to do the same work.
I used to be very heavy into CPU architecture as an OS developer, and at Georgia Tech I had a class on microcode programming - both CISC and RISC instructions eventually get decoded down into microcode. I have heard that Intel has changed the design of its microcode to take away the advantage of RISC instructions in the instruction pipeline - by changing the way the instructions work in the pipeline.
The most interesting thing I have heard in this area is a proposed technical rumor about 10nm, which is about optimized MOVS instructions - extremely frequent code emitted by compilers and also by people who hand-write assembly (pretty rare now).
There is a book called "80x86 Architecture & Programming" by Rakesh K. Agarwal, 1991, which pretty much provides pseudocode for each of the x86 instructions of the time. If the rumors are correct and Qualcomm made some optimizations for x86, I could see them creating new instructions to make it run faster - books like this one and other knowledge could help.
Then it raises a bigger question: does the Qualcomm processor become an ARM processor with Intel x86 emulation - is it going to be another Intel-clone CPU like an AMD CPU?
As far as "iPhone is as powerful as a Macbook Pro because of Geekbench's useless cross-platform results" goes, this could be true in my case of hardware: an iPhone 6 and a 2009 MacBook Pro 13 with a Core 2 Duo 2.2GHz. But there is no way even the iPhone X is anything close to a new MacBook Pro in real life.
Wilco1 - Thursday, December 7, 2017 - link
I would suggest you read some more books and do some actual programming before making so many obviously false claims. The Hennessy & Patterson book is biased towards MIPS but OK for beginners learning about RISC and CISC: https://www.amazon.co.uk/Computer-Architecture-Qua...
The fact is Arm and AArch64 typically require fewer instructions than x86. This is simply about good ISA design: adding only instructions which are useful to compilers. It's funny you mention REP MOVSB, that's a perfect example of a complex and slow instruction that is never used by compilers. This has been the case for the last 30 years - even in the latest x86 CPUs, REP MOVSB is something like 3x slower than a hand-coded memcpy for typical sizes...
HStewart - Thursday, December 7, 2017 - link
"It's funny you mention REP MOVSB, that's a perfect example of a complex and slow instruction that is never used by compilers"
It is obvious some people have no idea about compilers - at least for C++, a lot of the time moving variables around gets translated from C++ code into these instructions in the actual assembly code.
I compiled a simple MFC app and found tons of REP MOV instructions:
_this$ = ecx                          ; 'this' pointer is passed in ECX (__thiscall)
push ebp                              ; standard stack-frame prologue
mov ebp, esp
sub esp, 204 ; 000000ccH              ; reserve 204 bytes for locals
push ebx                              ; save callee-saved registers
push esi
push edi
push ecx
lea edi, DWORD PTR [ebp-204]          ; EDI = start of the local area
mov ecx, 51 ; 00000033H               ; 51 dwords = 204 bytes
mov eax, -858993460 ; ccccccccH       ; debug-build fill value 0xCCCCCCCC
rep stosd                             ; fill the locals with 0xCCCCCCCC (rep stosd stores EAX repeatedly)
"This has been the case for the last 30 years - even in the latest x86 CPUs, REP MOVSB is something like 3x slower than a hand coded memcpy for typical sizes..."
do you realize that memcpy is actually a C function that is often implemented using REP MOVS? Don't just take my word for it - look at the people who discuss the internals of the Linux kernel:
https://stackoverflow.com/questions/27705053/memcp...
"The Hennessy&Patterson book is biased towards MIPS but OK for beginners learning about RISC and CISC. https://www.amazon.co.uk/Computer-Architecture-Qua...
This book actually looks like a newer version of the book I had at Georgia Tech in the 1980s - yes, things have changed - but I have over 30 years in software development, including almost seven years in Intel assembly operating system development, and finding an erratum in the CPU related to 386 protected-mode code.
"The fact is Arm and AArch64 typically require fewer instructions than x86"
You do realize that by its nature a single CISC instruction takes many RISC instructions to execute.
HStewart - Thursday, December 7, 2017 - link
Actually my example above has rep stosd - but it is very similar to rep movs, which, as mentioned above, is how the C "memcpy" function is implemented.
Sorry about this - but as far as I can tell, this site does not allow edits.
Wilco1 - Friday, December 8, 2017 - link
Sure, you can find examples where VC++ emits rep instructions when optimizing for size, but when you turn on optimization for speed, compilers won't, because it would slow down your code. Look in GLIBC: there are SSE, AVX, AVX2 and AVX512 variants of memcpy and memset.
Wilco1 - Friday, December 8, 2017 - link
"You do realize that by its nature a single CISC instruction takes many RISC instructions to execute."
This kind of statement shows you actually have no idea. In reality compilers don't use complex instructions - if you look at typical x64 code you'll see the majority of instructions are simple (even load+op instructions are hardly used). The really complex instructions like ENTER/LEAVE/REP MOV/SIN/COS etc are practically never used. If you did use the complex instructions then it would not be possible to design a CPU that ran them fast enough. So x86 is fast only because CPUs optimize for the simple instructions that compilers use.
Another interesting RISC/CISC fact that most people get wrong too: x86 has very bad code size due to its complex instruction encoding. AArch64 code, for example, is smaller than x64 despite using fixed-width 32-bit instructions. Yes, x64 instructions need more than 32 bits on average!
name99 - Thursday, December 7, 2017 - link
They both use LLVM, no?
So why do you expect a massive difference on the compiler side?
HStewart - Friday, December 8, 2017 - link
If you are referring to Qualcomm vs Apple, Apple does have the unique advantage that their CPU/GPU is for a single OS - we might see that new chips have a significant advantage with new versions of the OS.
Apple also has tighter control of app development, whereas Microsoft and Android have less control of apps, which also have to work on different CPU/GPU combinations.
Wilco1 - Friday, December 8, 2017 - link
Yes, and I believe the goal is to use LLVM for the Windows version too eventually. Compiler differences are small, but there are issues due to using different libraries. But that's nothing new, Windows has always been slower allocating memory, opening or accessing files etc.
juicytuna - Thursday, December 7, 2017 - link
Where are you getting 2100 from? The 835 typically scores around 1800. I would guess the 845 will end up around the ~2600 mark.
Wilco1 - Thursday, December 7, 2017 - link
Typical scores are around 2100: https://browser.geekbench.com/v4/cpu/search?page=1... (press page 9 to verify that many phones score the same result)
juicytuna - Thursday, December 7, 2017 - link
Ok, I was looking at Galaxy S8 / Note 8 scores. I wonder why the Chinese phones score so much higher than the Samsungs?
tuxRoller - Thursday, December 7, 2017 - link
Optimization or cheating.
Wilco1 - Thursday, December 7, 2017 - link
Samsung uses 2 different SoCs in its phones and ensures performance (and battery life) is similar, so they run the SD835 version at a lower frequency. Max frequency is 2.35GHz in the S8, but I guess it must be a bit lower when running Geekbench.
Note the Kirin 970 gets over 2000 in GB4 at 2.4GHz, so 2100 for a 2.45GHz SD835 is spot on.
Andrei Frumusanu - Saturday, December 9, 2017 - link
GeekBench changed the memory latency subtest in GB 4.1 and it gives a lower score than GB 4.0. All ~2100ish numbers are GB 4.0.
Wilco1 - Saturday, December 9, 2017 - link
That may cause some of the difference, but not nearly all. Consider eg. https://browser.geekbench.com/v4/cpu/compare/52738...
If you ignore the memory score, the crypto/int/fp results for the S8 are exactly what you'd expect when running between 2.25 and 2.35GHz. The AES test shows the actual frequency the CPU runs at.
peevee - Thursday, December 7, 2017 - link
Where do you see the test result?
MonkeyPaw - Wednesday, December 6, 2017 - link
Yeah, the scale of the graphs is definitely misleading. If something isn't even 2x the performance baseline, then the bar shouldn't blow well past the 2x mark. That's not to take away from the actual performance gains, but it's definitely on the misleading side.
Frenetic Pony - Wednesday, December 6, 2017 - link
Can the DSP also be configured for AV1? The standard should (theoretically) be locked down this year. And Google refuses to pay the HEVC fees for Android, as does (apparently) everyone but the richest handset sellers. Meaning AV1 will, in all likelihood, be the HDR choice for both video and stills next year.
Andrei Frumusanu - Wednesday, December 6, 2017 - link
Video encoding happens on the Venus fixed function blocks so it's unlikely it'll ever get any other codec support.
mode_13h - Wednesday, December 6, 2017 - link
Hopefully this will make 835-based devices more affordable. I'm not about to spend close to $1k on a smartphone.
Also, I thought we'd be seeing HBM2 in mobile by now. I guess it's probably still too expensive.
extide - Thursday, December 7, 2017 - link
Doubt we will see HBM in mobile ever. It's way overkill, takes up too much board space, too much power, etc.
mode_13h - Saturday, December 9, 2017 - link
HBM2 is *more* power-efficient and denser than DDR4. This is exactly why it's only a matter of time.
MrSpadge - Thursday, December 7, 2017 - link
They don't even generally use 128 bit DDR4, so even a single HBM stack would be total overkill.
mode_13h - Saturday, December 9, 2017 - link
Why do you assume that's due to it being unnecessary vs. being too space/power-intensive?
Graphics & AI both want lots of bandwidth, and that's where HBM2 is king.
HStewart - Thursday, December 7, 2017 - link
I seriously doubt the 835 will be affordable - with gigabit LTE support included and an attempt to go up against Windows platforms with the same chip, it's going to be quite expensive. Qualcomm is using their near-monopoly on LTE and taking advantage of it.
mode_13h - Wednesday, December 6, 2017 - link
BTW, my understanding of Qualcomm's approach to AI is to use Hexagon for energy-efficient inferencing. For more performance-intensive tasks, there's the GPU.
tuxRoller - Thursday, December 7, 2017 - link
https://www.qualcomm.com/news/onq/2017/01/09/tenso...
The Hexagon 682 is roughly twice as fast AND three times as efficient as the Adreno.
mode_13h - Saturday, December 9, 2017 - link
Well, that's progress. Then, why even support the GPU? ...I guess there's the matter of legacy devices, with slower/smaller DSPs.
CPU support could make sense for very tiny models and small batch sizes, where it might not be worth spinning up the DSP.
tuxRoller - Saturday, December 9, 2017 - link
It's easy to support it, so that's another checkbox for the literature. I'm not sure if there is a crossover point in their lineup where the SoC GPU becomes faster than the Hexagon for these loads.
Btw, I can't imagine there would ever be a useful model where it makes more sense to run it on the CPU if you have any alternative. It would just be painfully slow.
mode_13h - Thursday, December 14, 2017 - link
The CPU has NEON, so if your network is tiny and the batch is small, it could be worth just running it on the CPU rather than stuffing the request in a queue for the DSP and suspending/resuming your process while waiting for the DSP to handle it. All of that is overhead you wouldn't have if you just ran it locally on the CPU.
It's basically the same argument for why the CPU even has an FPU, rather than always shipping everything over to the DSP or GPU.
Danvelopment - Wednesday, December 6, 2017 - link
16% gain appears to now mean 160% gain.
That is quite easily one of the worst graphs I've ever seen.
matfra - Wednesday, December 6, 2017 - link
Seriously how can they get away with such a misleading performance chart...
Notmyusualid - Thursday, December 7, 2017 - link
I, too, am surprised Anand printed it.
WJMazepas - Wednesday, December 6, 2017 - link
I would prefer a Chromebook with this SoC over one with an Intel SoC. Better efficiency, and it would run Android applications better.
SquarePeg - Thursday, December 7, 2017 - link
As would I. Sadly, Qualcomm doesn't support their SoCs for 4 years, so you won't be seeing a Snapdragon in a Chromebook anytime soon. What is needed for Chrome OS is for Google to "make" (contract to a third party) a nearly stock ARM A75/A55 SoC with a large-die Mali graphics solution pushing around 1.2 teraflops of fp16 performance. Give it a 4.5 watt TDP so the SoC can hold those higher clocks better, and most importantly give it 4+ years of driver support. Chromebooks don't need advanced camera ISPs, AI, or gigabit+ LTE modems.
SydneyBlue120d - Thursday, December 7, 2017 - link
Finally 2160p60 HEVC encoding!!! Let's see if someone will be brave enough to enable it on smartphones...
Only remaining questions:
Is L5 signal supported like in the BCM47755 ?
And what about Sapcorda support?
extide - Thursday, December 7, 2017 - link
Good question, I'd like to know this one as well.
Pork@III - Thursday, December 7, 2017 - link
The 845 is just an overclock with the L2 cache cut in half - castrated.
Lodix - Thursday, December 7, 2017 - link
Did they mention anything about the "up to 2.8GHz" just being for a single core, or is it the max frequency for all cores?
colinisation - Thursday, December 7, 2017 - link
Hi Andrei,
Do you know if the new Hexagon supports FP in HVX?
Thanks
Andrei Frumusanu - Thursday, December 7, 2017 - link
It does not.
SydneyBlue120d - Monday, December 18, 2017 - link
It seems to be supported in GPU and CPU only:
https://semiaccurate.com/2017/12/18/qualcomms-ai-u...
"The Hexagon 685 HVX ISA now supports INT8 instructions, the Adreno 630 has FP32 and FP16 as you would expect from a GPU, and the Kryo 385 now does FP32 and INT8."
UtilityMax - Thursday, December 7, 2017 - link
Finally there is an update to the "LITTLE" core in the "big.LITTLE" configuration. After an extremely long three years of service, the Cortex-A53 is finally being replaced with the A55. After this, hopefully the budget Android devices will finally get some kind of an upgrade.
MrSpadge - Thursday, December 7, 2017 - link
"Qualcomm claims it's able to reduce memory access bandwidth by up to 40-75%, a significant figure."
"Qualcomm claims it's able to reduce memory access bandwidth by up to 40-75%, a significant figure."Oh boy, how I have waited for my memory bandwidth to be reduced like that ;)
(should be "...reduces power by limiting memory access...")
"...captures multiple pictures in fast succession and applies an algorithm to remove noise reduction in higher quality fashion..."
Yeah, I couldn't stand that higher quality noise reduction either ;)
(-> "algorithm to remove noise" or "algorithm for noise reduction")
"Most devices I've seen with Snapdragon 835's used about 1.1W per core at peak using the power virus-style workload we traditionally use so ... is quite surprising"
If it's 1.1 W in the worst case I don't think an increase of peak performance is all that surprising. Even 1.5 W on all 4 cores should be sustainable in usual flagship phones, as long as the GPU does not add load. If I remember correctly, the problems started at over 2 W per core (SD810). And any software which uses fewer threads, works those threads lighter or just runs for a short time will definitely be fine. IMO QC's choice here is not surprising and beneficial overall.
tuxRoller - Thursday, December 7, 2017 - link
A little math:
1.5W x 4 = 6W
Add the roughly 1W for screen and another for GPU and display pipeline and you're looking at about 8W.
A typical flagship battery has around 3700mAh @ either 3.7V or 4.2V for a Wh range of 13.7-15.5 or less than 2 hours of use.
Of course, thermal throttling will kick in well before that.
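A quick sanity check of that back-of-the-envelope estimate in Python; the per-component wattages are the rough assumptions from the comment above, not measurements:

# Rough sustained power budget assumed above
cpu_w = 1.5 * 4                              # four big cores at ~1.5 W each
screen_w = 1.0                               # display
gpu_display_w = 1.0                          # GPU + display pipeline
total_w = cpu_w + screen_w + gpu_display_w   # ~8 W

battery_mah = 3700
for volts in (3.7, 4.2):
    wh = battery_mah / 1000 * volts          # 13.7 .. 15.5 Wh
    print(f"{wh:.1f} Wh / {total_w:.0f} W = {wh / total_w:.1f} h")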
name99 - Thursday, December 7, 2017 - link
You CAN actually reduce memory access bandwidth by putting a very simple compressor in the memory controller. It adds a few cycles (not many) and can drop the required bandwidth by up to 50%.
Centriq does this, so QC clearly has the tech, though being a different group it may not have moved sideways to the mobile group yet.
The next frontier is to compress cache lines in L3, and there's been a fair bit of academic work showing the value of doing this, along with devising appropriately low latency compressors. I expect we will see this within five years.
(If ARM or QC do it, we'll hear. If Apple do it, of course we'll hear nothing --- hell, they might already be doing it. [I suspect Apple may have introduced memory controller compression with the A11, hence their spectacular memory bandwidth numbers?]
It would make sense for IBM; and of course it would make sense for Intel except they seem to have made a company decision five years ago that they were never going to introduce a single damn innovation again after Skylake...)
Arbie - Thursday, December 7, 2017 - link
Anandtech should either fix that misleading "New performance levels" graph or delete it. This is a tech site, right? Shame on Qualcomm for even creating it.
peevee - Thursday, December 7, 2017 - link
The A55 is very attractive, but less L2 cache and a lower frequency will kill the performance improvement on the efficient cores. And the inefficient cores just have a higher maximum frequency and lower L5, so they are even less efficient.
Kind of a disappointment. The A55's arrival was supposed to be great.
Wilco1 - Thursday, December 7, 2017 - link
The low latency private L2 caches actually improve performance. Why else would you think they were added, to slow things down? Also you forget that the little cores previously had just 1MB L2, now it's 0.5MB in private L2 plus 2MB shared L3. When running only little cores (common scenario) that's 2.5 times more cache, or 6.5 times if you include the 3MB system cache.
Only 6.5 times more cache, that's very disappointing indeed!
Pork@III - Thursday, December 7, 2017 - link
Private(erotik) L2...:D but half private...Yes yes bit slower L3 and...More of slower L4 cache X+XGB RAM...L5 cache XXXGB nand flash cells Wooo 6555555555555 times more cache. I wish more volume of L2 which is faster than any other cache with large number.
peevee - Thursday, December 7, 2017 - link
"the new CPU cores are likely based on A75's we should be expecting IPC gains of up to 22-34% based on use-cases"
This is absolutely baseless. The increases promised were not limited to IPC but to the combination of IPC and higher frequencies.
Andrei Frumusanu - Saturday, December 9, 2017 - link
Those are ARM's official numbers on A75 IPC gains.
versesuvius - Thursday, December 7, 2017 - link
Much Ado about Nothing!
BillBear - Thursday, December 7, 2017 - link
It seems odd to see Qualcomm move away from a custom core. Along with Apple, they were the only other company that has historically rolled their own going way back to the Scorpion cores in 2008.
Raqia - Thursday, December 7, 2017 - link
According to The Reg, this is due to the CPU team at Qualcomm being repurposed for servers in the past 3 years:
https://www.theregister.co.uk/2017/08/20/qualcomm_...
https://www.theregister.co.uk/2017/12/07/qualcomm_...
Rumor has it that they will be back for the 855. Another interesting tidbit is that there are optimizations in the 845 for x86 emulation.
Qualcomm did an admirable job of integration as usual. Despite the A-series SoCs' CPU speed advantage, Apple used much more die space to achieve it relative to Qualcomm and has so far never been able to fab a modem on die. Apple's vertical integration and designing-out of suppliers will give them a speed win in the short term, but in the longer term they will be slower to adopt improvements from bigger shifts in industry requirements, or better parts forged from satisfying the needs of multiple customers.
BillBear - Thursday, December 7, 2017 - link
Given the difference in profit margins between the server and smartphone markets, if they can only afford one high profile design team, I wouldn't expect them to move that team back to smartphone chips at all. They seem to have a credible enough version one of an ARM server chip on their hands.
After we've seen so many nations find against Qualcomm on antitrust grounds for abusing their CDMA patents, and with CDMA being in its sunset years anyway, they are now going to need to plan for a future where their smartphone chip profit margin is greatly reduced.
Raqia - Thursday, December 7, 2017 - link
Recall that CDMA refers both to the legacy 2G voice standard and, more generally, to the code division algorithm used in 3G and 4G, and Qualcomm owns the main patents that enable these. (OFDMA is used in 4G, but Qualcomm owns the patents here too from its purchase of Flarion.) The voice standard is going away, but Qualcomm's patents in the higher speed data standards will be relevant and enforceable for a long time.
There's plenty of distortion of Qualcomm's licensing model from big angry plaintiffs as an undeserved cut of innovation, but due to Qualcomm's voluntary royalty cap and low absolute size, it is better interpreted as a discount for makers of cheaper phones that make less intensive use of IP. Regulators even in the EU agree with this and it's likely the model stays:
https://www.reuters.com/article/us-eu-patents/eu-p...
BillBear - Thursday, December 7, 2017 - link
Unlike the situation with CDMA, where Qualcomm owned the patents outright with no FRAND restrictions on licensing, LTE patents are all FRAND encumbered. Meaning Qualcomm has no choice but to license them without the sorts of shenanigans that have already landed them in antitrust hot water from so many nations, with more antitrust suits still in process. The US, for instance.
Qualcomm has to plan for a future where they can no longer milk the golden goose that was their CDMA patents. Profit margins in the server market are much higher than those for a smartphone SOC.
Early testing shows that their version one of a server chip is credible enough, but Intel is currently feeling threatened enough that Qualcomm really should keep their design team cranking on server chips.
Raqia - Thursday, December 7, 2017 - link
At the end of a product cycle, OEMs care a whole lot more about margins than about licensing key technologies - enough to lie to regulators and break voluntarily signed, enforceable contracts that had been in effect for years and have conferred untold billions in benefits. The current atmosphere has much more to do w/ lobbying by large OEMs than legal standing or fairness, and indeed there are strong dissenting voices in the FTC and the Taiwanese chamber of commerce over this specific matter.
That EU statement lends regulatory credence to the general model Qualcomm and others employ, and it's not just up to the licensees to determine what's "fair" or "non-discriminatory." In an industry where misreporting is so prevalent, I don't see a problem with erring on the side of getting paid in enforcing contracts by refusing to supply your version of the IP implementation to non-willing licensees, or finding a correct range for your IP using market negotiations that may involve incentives. (You can't seem to win here, as more toothless non-practicing patent holders who can only use the nuclear option of barring importation will be called trolls.) There will probably be more fines next year, but the current view on this issue neglects extensive precedent and legal standing; I think the model survives, along with their margins.
BillBear - Thursday, December 7, 2017 - link
The issue for Qualcomm is that their patent free ride is ending, and with that their outsized profit margins.
Once CDMA is sunset by the carriers, we are no longer going to see the situation where, for instance, Samsung is forced to use Qualcomm's SOC in their US phones instead of simply using their own in house chips.
Qualcomm needs to plan for the future where they will have to get by in a world where the courts will decide what constitutes a reasonable royalty on their FRAND encumbered patents, and we've already seen the courts smack down several of the tech giants in the last ten years for not being the least bit fair or reasonable in their demands.
>According to the ruling, Microsoft will now have to pay Motorola Mobility $0.00555 for each unit it sells incorporating the H.264 standard, and $0.03471 per unit comprising IEEE 802.11 technology. Moto had originally argued for a 2.25% cut of the net selling price of the end product incorporating its SEP-protected technology. A chart from Microsoft, reproduced at AllThingsD, suggests that Moto’s original demands would have amounted to an annual payout from Microsoft of $4 billion. After Judge Robart’s ruling, Moto will be receiving a comparatively paltry $1,797,554 per year.
http://www.iam-media.com/Blog/Detail.aspx?g=f4a876...
Raqia - Thursday, December 7, 2017 - link
Again, CDMA the voice standard is getting sunset, not 3G/4G, which use in part the same algorithm but are covered by very different patents. Qualcomm is very much responsible for developing the 3G/4G infrastructure we all use and collects royalties from OEMs whose phones participate in the networks they enable.
The main difference from Motorola's claim was that they only held a couple of interlacing patents in h.264 vestigial to the standard, whereas Qualcomm owns the backbone. Comparisons were made to alternatives and other pools of patents using standard Georgia-Pacific criteria for valuing patents, and they were deemed to be worth far less than they were attempting to charge. I believe a similar analysis on Qualcomm's patents could well place their value above what they're charging.
BillBear - Thursday, December 7, 2017 - link
Again, unlike the case with Qualcomm's CDMA patents, all of the LTE patents are FRAND encumbered. This was a requirement for any patent to be included in the LTE standard when it was created.
If you refused to agree to license your patents to all takers on a Fair Reasonable, And Non-Discriminatory (FRAND) basis, they used somebody else's IP to create the standard instead of yours.
Once CDMA is sunset and everyone moves to LTE, Qualcomm will no longer be in the position where they have patents that are absolutely necessary if you want a device that works on a network using Qualcomm's CDMA standard (Sprint, Verizon and US Cellular in the United States), with no FRAND restrictions on those patents.
Between recent antitrust findings and court rulings in the past decade on FRAND encumbered patents, along with CDMA use being sunset by the carriers in the not so distant future, the golden goose is slowly waddling away.
Which is why I say that Qualcomm needs a continued investment into a potential higher margin server chip future, because Intel is currently feeling threatened by competition for the first time in forever, and you can be sure they are no longer going to be so complacent.
Raqia - Thursday, December 7, 2017 - link
You still misunderstand: there are NO alternatives used by major carriers. Qualcomm owns the backbone of LTE patents, and they will continue to have a substantial presence in 5G standards IP as well. In exchange for FRAND licensing, the industry made many methods that Qualcomm invented into standards for the cellular interface with the EM spectrum in 3G/4G LTE/5G. That FRAND licensing should take into account the economic benefit of standards was recently released as a guideline in the EU, and it is a major point of contention for Apple. Qualcomm keeps its licensing revenue model under this.
BillBear - Thursday, December 7, 2017 - link
I don't misunderstand a thing.
Qualcomm is moving from a present where they can require any payment they like for the use of their CDMA cell phone patents with no FRAND restrictions at all (patents that are absolutely necessary for devices to work with a subset of carriers), to a future where all their patents on cell phone connectivity standards will be FRAND encumbered.
Court rulings over the past decade make it very clear that FRAND encumbered patents can't be used to price gouge, despite the best efforts of some very well heeled tech giants to do exactly that.
Qualcomm will still make a nice, reasonable profit on FRAND encumbered patents that will most likely be set either by the courts or through binding arbitration, but they will no longer be able to make an absolute killing the way their CDMA patent portfolio has allowed.
They have been in a highly profitable position for decades after creating a standard used by a subset of carriers in the days before modern standards bodies would only use FRAND encumbered patents, but as I say, that golden goose is slowly waddling away.
Raqia - Thursday, December 7, 2017 - link
I guarantee you that the legacy CDMA voice standard that's being sunset is not what Qualcomm is making most of its licensing revenues on, nor is it what it's exerting its influence through. It is its 3G and later FRAND patents that are most valuable today and under the most contention. It receives the bulk of its licensing fees for these, but it is through the precedent of hundreds of voluntary licensing agreements that the notion that the contracts are fair is determined, not one or two licensees who are really after deals going into 5G.
BillBear - Thursday, December 7, 2017 - link
I guarantee you that the second US carriers sunset CDMA, Samsung will stop using Qualcomm's SOC and modem in the US versions of their phones and use the SOC and modem they produce in-house instead, just like Samsung does in every other nation.
Do you imagine that Samsung uses much more expensive parts in the US for some other reason than Qualcomm's ability to hold those CDMA patents over their head?
Raqia - Thursday, December 7, 2017 - link
We're discussing a different issue now, and I agree Samsung has less incentive to use Qualcomm parts once that happens. SoC manufacturing is generally much lower margin for Qualcomm than its licensing business, so it won't lose much margin on Exynos Galaxy phones, on which Samsung still owes Qualcomm a royalty. Its SoCs will still have a major edge in modem speed and integration over the Exynos, which uses an on-package modem that incurs more expense than fabbing it on die.
BillBear - Thursday, December 7, 2017 - link
That royalty will be a tiny fraction of what Qualcomm is currently able to demand, as they will no longer be legally allowed to charge a percentage of the retail cost of the entire device when they only have FRAND encumbered patents to leverage.
Again, this is exactly what Google tried to do with Microsoft's Xbox after Google bought Motorola, and the courts smacked Google down.
>According to the ruling, Microsoft will now have to pay Motorola Mobility $0.00555 for each unit it sells incorporating the H.264 standard, and $0.03471 per unit comprising IEEE 802.11 technology. ***Moto had originally argued for a 2.25% cut of the net selling price of the end product*** incorporating its SEP-protected technology. A chart from Microsoft, reproduced at AllThingsD, suggests that Moto’s original demands would have amounted to an annual payout from Microsoft of $4 billion. After Judge Robart’s ruling, Moto will be receiving a comparatively paltry $1,797,554 per year.
http://www.iam-media.com/Blog/Detail.aspx?g=f4a876...
Raqia - Thursday, December 7, 2017 - link
Totally untrue. The royalty will continue to be assessed on a discount schedule off of a fixed fee, and this is the same for any phones using Qualcomm's 3G/4G LTE patents. Furthermore, Samsung indeed used its own chips for the S6 but returned to Qualcomm for the S7 and S8; it is interested in dual-sourcing for large-volume units as well as capital commitments to its capex-heavy foundry business.
If you read the details of the Google/Motorola vs Microsoft trial, the issue was that Motorola's patents were vestigial to the standard involved, and it was correctly determined that the value added was minimal. Qualcomm's patents are very much seminal and central to 3G and 4G LTE, and the value added is substantial, as seen from the difference in price in consumer products like iPod vs iPhone or iPad vs iPad LTE.
BillBear - Thursday, December 7, 2017 - link
Absolutely true.
Qualcomm has been in the enviable position of having total control of patents that are absolutely necessary to produce a cell phone that will function with the subset of carriers who used CDMA, with no legal restrictions on what they could demand for the use of those patents.
Qualcomm have been demanding (and getting) a percentage of the entire device, because there were no FRAND restrictions to stop them.
Those days are ending the moment that CDMA gets sunset by the carriers and Qualcomm is only left with FRAND encumbered patents covering cellular connectivity that they leverage.
No more demanding a percentage of the cost of the entire device.
Additionally, Samsung and Apple both show every sign that they will kick Qualcomm to the curb the moment they no longer have to kiss the ring in exchange for access to those CDMA patents.
Qualcomm needs to be thinking about where its future high revenue streams are going to come from, because that golden goose is slowly waddling away.
Raqia - Thursday, December 7, 2017 - link
What's not FRAND about that? The standard fee they assess is capped at around $10 for 3G/4G LTE, and if your bill of materials is under $300, they'll give you a discount. (5G licensing is similar; you can check their website for the details.) It will clearly be an OEM's goal to maximize their own margins and characterize the fee schedule as an undeserved cut of technologies that have nothing to do with Qualcomm, when in reality it's more of a discount.
Dunno about you, but I live where museums, theaters, and amusement parks give children, seniors and students a discount, and it's done in the name of fairness. Income tax schedules are even progressive, not just proportional, also in the name of fairness. Qualcomm gives manufacturers of cheaper phones a discount on the technology needed to participate in the networks they enable, and this is fair since it enables new entrants and competition in the cell phone market.
BillBear - Thursday, December 7, 2017 - link
Are you under the impression that it's your job to prop up Qualcomm's stock price? It sure seems that way.
Qualcomm faces major changes in their market position that are going to severely cut into their profit margins in the not so distant future when CDMA is sunset by the carriers and they are left with only FRAND encumbered patents on the basic cellular connectivity every device requires.
That doesn't mean they won't still get a fair and reasonable royalty on their patents that are part of the LTE standard, but the price gouging and its associated exceptionally high profit margins will end.
Just as Google saw its demand that Microsoft pay four billion dollars for using FRAND encumbered patents in the Xbox reduced to a payout of less than two million dollars by the courts, Qualcomm will no longer be legally allowed to price gouge when it only has FRAND encumbered patents to rely on.
Server chips are a high margin business and Qualcomm's server chip looks pretty good for a first revision, so it behooves Qualcomm to keep up its investment in that area if they are thinking about the future and Intel waking from its slumber.
Raqia - Thursday, December 7, 2017 - link
Well, it's also not like Steve Jobs put on a turtleneck and said: "...one more thing: we're suing Qualcomm," and I just think far too many people buy the distortive comments by Apple at face value. They made next to no investments into standards and rode licensing past prior top phone makers in just a few years. I also think it's up to the market to determine what a fair rate is, rather than a single self-interested licensee who has a habit of stealing IP and is not honoring contracts. The patents involved in Qualcomm's dispute are worth far more than those in the Motorola dispute if valued w/ similar methodologies as were employed in the Motorola dispute, so I think the outcome for Qualcomm in a similar trial would be quite different. Qualcomm is rather aggressive in monetizing its patents and definitely not a good guy, but I think in recent disputes Apple is behaving far more badly.
BillBear - Thursday, December 7, 2017 - link
All the nations where Qualcomm has already lost antitrust cases disagree.
Currently in the US, the Federal Trade Commission is also going after them.
>The FTC alleges that Qualcomm has used its dominant position as a supplier of certain baseband processors to impose onerous and anticompetitive supply and licensing terms on cell phone manufacturers and to weaken competitors.
>Qualcomm also holds patents that it has declared essential to industry standards that enable cellular connectivity. These standards were adopted by standard-setting organizations for the telecommunications industry, which include Qualcomm and many of its competitors. In exchange for having their patented technologies included in the standards, participants typically commit to license their patents on what are known as fair, reasonable, and non-discriminatory, or “FRAND,” terms.
>When a patent holder that has made a FRAND commitment negotiates a license, ordinarily it is constrained by the fact that if the parties are unable to reach agreement, the patent holder may have to establish reasonable royalties in court.
>According to the complaint, by threatening to disrupt cell phone manufacturers’ supply of baseband processors, Qualcomm obtains elevated royalties and other license terms for its standard-essential patents that manufacturers would otherwise reject. These royalties amount to a tax on the manufacturers’ use of baseband processors manufactured by Qualcomm’s competitors, a tax that excludes these competitors and harms competition. Increased costs imposed by this tax are passed on to consumers, the complaint alleges.
https://www.ftc.gov/news-events/press-releases/201...
When CDMA tech sunsets and Qualcomm can no longer use its CDMA patent portfolio as a price gouging weapon, their profit margins are going to plummet. There is nothing Qualcomm can do about it.
Raqia - Thursday, December 7, 2017 - link
You sound convinced it's open and shut, but it's not as monolithic an opinion as you seem to believe it is. Here's a rare written dissent by the acting chairwoman of the FTC in the specific matter of Qualcomm:
https://www.ftc.gov/system/files/documents/cases/1...
and a counter statement from the Taiwan ministry of commerce regarding the trade commission's decision to fine Qualcomm:
https://www.reuters.com/article/us-qualcomm-taiwan...
Most of these government acts were the result of lobbying specifically by Apple, and the FTC lawsuit was filed on the eve of the Obama administration's departure, followed by a hasty suit by Apple. This was not a coincidence as Trump will be much more friendly to American IP.
The EU specifically lays out guidance for longer term rules that add transparency but demand calculation of royalties based on economic value added:
https://www.bloomberg.com/news/articles/2017-11-28...
Although Qualcomm will make some concessions, its basic model will not be affected. Again, it's not about CDMA but 3G/4G LTE and negotiations going into 5G.
BillBear - Thursday, December 7, 2017 - link
Again, all the nations where Qualcomm has already lost disagree with you.
Qualcomm is no longer going to be able to price gouge for a percentage of the retail price of a device when it only has FRAND encumbered patents to fall back on.
Raqia - Thursday, December 7, 2017 - link
A FRAND obligation isn't one that just takes into consideration the distorted views of large established OEMs, but it should be fair to all parties in an ecosystem. A low fixed fee with a discount for less intensive IP usage to enable competition is very fair despite some OEMs crying foul on licensing that cedes any ground to competitors. You'll note the EU agrees on some of the most important issues with Qualcomm, Ericsson, Nokia and other patent holders:
https://www.reuters.com/article/us-eu-patents/eu-p...
"The Commission’s latest draft no longer has the phrase “licensing for all”, the sources said, a victory for Qualcomm as it removes the obligation on patent holders to provide patent license to all companies asking for them.
A key sentence in an earlier proposal has also been deleted, people said. The sentence said that right holders could not unilaterally set prices according to the way in which a patent is used."
BillBear - Thursday, December 7, 2017 - link
Again, the courts have already found that FRAND-encumbered patents cannot be used to discriminate against your competitors and price gouge for a percentage of the retail price of the entire device. Qualcomm has already faced a string of losses in antitrust suits and really doesn't have a leg to stand on going forward the moment the carriers sunset CDMA.
I expect their revenues to drop precipitously, both from lowered patent royalties on FRAND-encumbered patents versus their current price gouging on CDMA patents, and from companies like Samsung and Apple, who will have no reason to give them any business going forward after all the bad blood Qualcomm has caused.
Raqia - Thursday, December 7, 2017 - link
Courts have found nothing of the sort; if anything, recent EU statements explicitly exclude prior language, lobbied for by Apple, that would have prevented patent holders from unilaterally setting prices in the EU. The burden of proof for the use of rebates is not as cut and dried as prior regulatory decisions indicate: https://www.theregister.co.uk/2017/09/07/intel_ant...
Fines based on reasoning like this are open to appeal and closer examination. It's also much more about pricing than Apple's moralistic jargon would have you believe, so peace to you; I'll be happy to let whatever settlement or court decision eventually comes out of this have the de facto say in the matter.
ABR - Friday, December 8, 2017 - link
I have to say I know far less of the details of this than either of you, but I did see that Apple basically said, "Yes, we'll pay what you ask," then went out and made massive profits and remade their entire company based on devices using Qualcomm technology, and only after that went back and cried that they were charged too much. They had every opportunity to negotiate in good faith when they made their contracts, or raise the issue with the authorities, but they chose not to. Now that they're a giant with unlimited resources for litigation, they go back and say they don't want to pay. I like Apple's technology and products, but I can't respect this practice at all.
varase - Sunday, December 17, 2017 - link
I believe that FRAND terms should limit you to charging a license fee commensurate with the function you deliver to the device - in this case communications - and that that fee should be fixed for all licensees. The problem with Qualcomm licensing is that it attempts to charge a percentage of the entire retail price of the device, and extorts handset makers into paying Qualcomm a licensing fee on all the innovation delivered not only by the handset maker, but also by any other licensor or contributor whose material, quality, or technological improvements go into the device.
There should be a fixed license cost and it should be embedded with the chip, whether it's produced by Qualcomm or some 3rd party - and there it should (by the tenets of patent exhaustion) end. No more "percentage of the retail cost of the device" nonsense which attempts to capitalize on everyone else's work.
To praise Qualcomm for charging extortion-level licensing fees and then "giving a discount" is like praising a mobster for charging you $1000/day to keep your shop from burning down, then giving you a discount because you have to live on what he doesn't take. It's simply wrong-headed.
By limiting license fees to a cost embedded in the chip, licensing is kept at a reasonable level: if the fee is excessive, Qualcomm simply prices itself out of the entire feature-phone segment of the device market.
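To put rough numbers on it (all figures below are purely hypothetical, since nobody publishes their actual rates), here is a quick Python sketch comparing a fixed per-chip fee against a percentage-of-retail royalty:

```python
# Hypothetical comparison of two royalty models; all numbers are made up
# for illustration, since real rates are confidential and disputed.

FIXED_FEE_PER_CHIP = 10.00    # flat fee bundled with the modem chip (USD)
RETAIL_ROYALTY_RATE = 0.05    # 5% of the device's retail price

devices = {
    "feature phone": 100,
    "midrange phone": 400,
    "flagship phone": 1000,
}

for name, retail_price in devices.items():
    pct_royalty = RETAIL_ROYALTY_RATE * retail_price
    print(f"{name:15s} retail ${retail_price:4d} | "
          f"fixed fee ${FIXED_FEE_PER_CHIP:6.2f} | "
          f"% of retail ${pct_royalty:6.2f}")

# Under the percentage model, the $1000 flagship pays 10x what the $100
# feature phone pays for the same modem function; under the fixed model
# every device pays the same.
```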
Raqia - Tuesday, December 19, 2017 - link
A fixed fee is indeed still functionally a percentage of the sale price of a device, just a higher percentage as the price goes down. It's amazing that people buy Apple's distortion of this licensing model hook, line and sinker when Qualcomm, under its FRAND obligations, needs to be fair to all licensees and not just to the self-interested overtures of the biggest consumer electronics company in the world. To enable fairer competition, it gives makers of cheaper phones a discount off of a fixed rate.
The license fee enables a network, of which the chip it sells implements only a part. (The specifics of the implementation are a separate fee in the cost of the chip, not the SEP license.) There are patents related to the cell towers, and substantial IP in the rest of the phone's interfaces with the modem as well. As it owns the patents, Qualcomm should be free to charge what it likes so long as it's FRAND. Indeed, many pricing models you're familiar with offer discounts for the very same service: theater and amusement park tickets for children and seniors, or software licenses for students.
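To make that arithmetic concrete (again with made-up numbers, since the real fee schedule and discount tiers aren't public), here is a small sketch of the effective rate a fixed fee implies at different price points, with a hypothetical discount for cheaper devices:

```python
# Effective rate of a fixed per-device fee, with a tiered discount for
# cheaper devices. All figures are invented for illustration; actual fees
# and discount thresholds are not public.

FIXED_FEE = 10.00         # USD per device
DISCOUNTED_FEE = 4.00     # hypothetical reduced fee for low-cost devices
DISCOUNT_THRESHOLD = 200  # devices below this retail price get the discount

for retail_price in (100, 400, 1000):
    fee = DISCOUNTED_FEE if retail_price < DISCOUNT_THRESHOLD else FIXED_FEE
    effective_rate = 100 * fee / retail_price
    print(f"${retail_price:4d} device: fee ${fee:5.2f} "
          f"= {effective_rate:.1f}% of retail")

# Without the discount, a $10 fee is 10% of a $100 phone but only 1% of a
# $1000 phone; the tiered fee narrows that spread.
```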
rocky12345 - Thursday, December 7, 2017 - link
Always great to see new tech and performance uplifts, but I do have to question the Gold/Silver naming that they, and a lot of others, are now using in their marketing materials. I think Intel might have used it first with their Xeon lineup, but it seems now everyone thinks "oh, that's so cool, let's do it as well."
ZeDestructor - Friday, December 8, 2017 - link
In a nutshell: Asia. They love their precious metals out there.
djayjp - Thursday, December 7, 2017 - link
Andrei, you made an error in the table: the Snapdragon 835 only features A73 cores; it is not a big.LITTLE design.
djayjp - Thursday, December 7, 2017 - link
Oops, my mistake, nvm.
serendip - Friday, December 8, 2017 - link
Now we'll have to see DynamIQ being implemented on a midrange SoC like a 66x. I'd love to get 65x-like performance with 62x power consumption.
max1001 - Monday, December 18, 2017 - link
They need to increase the L2/L3 cache, or at least offer a variant with a larger cache and charge more for it.