
466 Comments


  • MatthiasP - Tuesday, September 17, 2013 - link

    Wow, first real review on the web AND deep as always, a very nice job from Anand. :) Reply
  • sfaerew - Wednesday, September 18, 2013 - link

    Are the benchmarks (GFXBench 2.7, 3DMark, Basemark X, etc.) AArch64 versions?
    There is a 30-40% performance gap between the 32-bit and 64-bit Geekbench results.
    INT (ST): 1471 vs. 1065.
    FP (ST): 1339 vs. 983.
    Reply
  • Wilco1 - Wednesday, September 18, 2013 - link

    And Bay Trail Geekbench at 2.4GHz: 1063 (INT), 866 (FP)

    So A7 has beaten BT already by a huge margin despite BT not even being for sale yet...
    Reply
  • TraderHorn - Wednesday, September 18, 2013 - link

    You're comparing the 64-bit A7 vs the 32-bit BT. The 32-bit #s are dead even. It'll be interesting to see if BT gets a similar performance boost when Win8 64-bit versions are released in 1H 2014. Reply
  • Wilco1 - Wednesday, September 18, 2013 - link

    BT's 32-bit result includes hardware accelerated AES, which skews its score (without it, its score is ~936). The 64-bit A7 result does also use hardware acceleration, so it is more comparable.

    Yes, BT will get a speedup from 64-bit as well, but it won't be nearly as much as the A7 gets: its 32-bit result already has the AES acceleration, and x64 isn't nearly as different from x86 as A64 is from A32.

    However, the interesting thing is not just that the A7 wins by a good margin even in 32-bit mode, but that it wins despite running at almost half the frequency of Bay Trail... Forget about Bay Trail, this is Haswell territory - the MacBook Air with the 15W 3.3GHz i7-4650U scores 3024 INT and 3003 FP.

    Now imagine a quad core tablet/laptop version of the A7 running at 2GHz on TSMC 20nm next year.
    Reply
  • smartypnt4 - Wednesday, September 18, 2013 - link

    Why does the frequency matter? If the TDP of the chips is similar (Bay Trail was tested and verified by Anand as using 2.5W at the SoC level under load), who gives a flip about the frequency?

    If Apple wanted to double the frequency of the chip, they'd need something on the order of 4x the power it already consumes (assuming a back-of-the-napkin quadratic relationship, which is approximately correct), putting it at ~6-8W or so at full load. That's assuming such scaling could even be done, which is unlikely given that Apple built the thing to run at 1.3GHz max. You can't just say "oh, I want these to switch faster, so let's up the voltage." There's more that goes into the ability to scale voltage than just the process node you're on.
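The back-of-the-napkin scaling above can be sketched in a few lines. The base power figure (~1.7 W for the A7 SoC under load) is an assumption for illustration only, not a measured value, and the quadratic exponent is the commenter's own approximation:

```python
# Hedged sketch of the quadratic power/frequency scaling assumed above.
# base_power_w = 1.7 is a placeholder, not a measured A7 figure.

def scaled_power(base_power_w, base_freq_ghz, target_freq_ghz, exponent=2):
    """Estimate power at a new frequency, assuming P scales as f**exponent
    (quadratic here; aggressive voltage scaling can push it closer to cubic)."""
    return base_power_w * (target_freq_ghz / base_freq_ghz) ** exponent

# Doubling the A7's 1.3 GHz clock under the quadratic assumption:
estimate = scaled_power(1.7, 1.3, 2.6)
print(round(estimate, 1))  # 4x the base power -> 6.8 W, in the ~6-8 W range cited
```

Under these assumptions the doubled-clock chip lands right in the 6-8 W window the comment describes.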

    Now, I will agree that this does prove that if Apple really wanted to, they could build something to compete with Haswell in terms of raw throughput. Next year's A8 or whatever probably will compete directly with Haswell in raw theoretical integer and FP throughput, if Apple manages to double performance again. That's not a given since they had to use ~50% more transistors to get a performance doubling from the A6 to the A7, and building a 1.5B transistor chip is nontrivial since yields are inversely proportional to the number of transistors you're using.

    Next year will be really interesting, though. What with Apple's next stuff, Broadwell, the first A57 designs, Airmont, and whatever Qualcomm puts out (haven't seen anything on that, which is odd for Qualcomm.)
    Reply
  • Wilco1 - Wednesday, September 18, 2013 - link

    Frequency & process matter. Current phones use about 2W at max load without the screen (see the recent Nexus 7 test), so the claimed 2.5W just for BT is way too much for a phone. That means (as you explained) it must run at a lower frequency and voltage to get into phones - my guess is we won't see anything faster than the Z3740 with a max clock of 1.8GHz. Therefore the A7 will extend its lead even further.

    According to TSMC 20nm will give a 30% frequency boost at the same power. So I'd expect that a 2GHz A7 would be possible on 20nm using only 35% more power. That means the A7 would get 75% more performance at a small cost in power consumption. This is without adding any extra transistors.

    Add some tweaks (like faster memory) and such a 2GHz A7 would be similar in performance to the 15W Haswell in the MacBook Air. So my point is that with a die shrink and a slight increase in power they already have a Haswell competitor.
    Reply
  • smartypnt4 - Wednesday, September 18, 2013 - link

    Frequency and process matter in that they affect power consumption. If Intel can get Bay Trail to do 2.4GHz on something like 1.0V, then the power should be fine. Current Haswell stuff tops out its voltage around 1.1V or so in laptops (if memory serves), so that's not unreasonable.

    All of this assumes Geekbench is valid for comparing HSW on Win8 to ARMv8/Cyclone on iOS, which I have serious reservations about attempting to do.

    The other issue I have is this: you're talking about a 50% clock boost giving a 100% increase in performance if we look at the Geekbench scores. That's simply not possible. Had you said "raise the clock to 1.6-1.7GHz and give it 4 cores," I'd be right behind you in a 2x theoretical performance increase. But a 50% clock boost will never yield a 100% increase with the same core, even if you change the memory controller.

    Also, somehow your math doesn't add up for power... Are you hypothesizing that a 2GHz A7 (with 75% of the performance of Haswell 15W, not the same - as per Geekbench) can pull 2.6W while Haswell needs 15W to run that test? Granted, Haswell integrates things that the A7 doesn't. Namely, more advanced I/O (PCIe, SATA, USB, etc.), and the PCH. Using very fuzzy math, you can claim all of that uses 1/2 the power of the chip.

    That brings Haswell's power for compute down to 7-8W, more or less. And you're going to tell me that Apple has figured out how to get 75% of the performance of a 7W part in 2.6W, and Intel hasn't? Both companies have ~100k employees. One is working on a ton of different stuff, and one makes processors basically exclusively (SSDs and WiFi stuff too, but processors are their main focus). You're telling me that a (relatively) small cadre of guys at Apple has figured out how to do it, and Intel hasn't yet on a part that costs ~6x as much, after trying to get deep into the mobile space for years? I find that very hard to believe.

    Even with the 14nm shrink next year, you're talking about a 30% power savings for Intel's stuff. That brings the 15W total down to 10.5W, and the (again, super, ridiculously fuzzy) computing power to ~5-6W. On a full node smaller than what Apple has access to. And you're saying they'd hypothetically compete in throughput with a 2.6W part. I'm not sure I believe that.

    Then again, I suppose theoretical bandwidth could be competitive. That's simply a factor of your peak IPC, not your average IPC while the device is running. I don't know enough about the low level architecture of the A7 (no one does), so I'll just leave it here I guess.

    I'm gonna go now... I'm starting to reason in circles.
    Reply
  • Wilco1 - Wednesday, September 18, 2013 - link

    The sort of "simple" tweaks I was thinking of are: an improved memory controller and prefetcher, a doubling of L2, and larger branch predictor tables. Assuming a 30% gain from those tweaks, the result is a 100% speedup at 2GHz (1.3 to 2.0 GHz is a 54% clock speedup, so you get 1.54 * 1.3 ≈ 2.0x perf). The 30% gain from tweaks is pure speculation of course, however NVidia claims a 15-30% IPC gain for similar tweaks in Tegra 4i, so it's not entirely implausible. As you say, a much simpler alternative would be to just double the cores, but then your single-threaded performance is still well below that of Haswell.
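The arithmetic in the comment above composes two multiplicative factors, and checks out as a rough sketch (keeping in mind the ~30% IPC gain is speculation, as the commenter says):

```python
# Quick check of the speedup composition: clock bump x assumed IPC gain.
# The 1.30 IPC factor is the commenter's speculative figure, not a measurement.

base_clock_ghz, target_clock_ghz = 1.3, 2.0
clock_speedup = target_clock_ghz / base_clock_ghz  # ~1.54x from frequency alone
assumed_ipc_gain = 1.30                            # hypothetical gain from tweaks

total_speedup = clock_speedup * assumed_ipc_gain
print(round(total_speedup, 2))  # 2.0 -> the claimed ~100% overall speedup
```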

    You can certainly argue some reduction in the 15W TDP of Haswell due to IO, however with Turbo it will try to use most of that 15W if it can (the Air goes up to 3.3GHz after all).

    Yes I am saying that a relative newcomer like Apple can compete with Intel. Intel may be large, but they are not infallible, after all they made the P4, Itanium and Atom. A key reason AMD cited for moving into ARM servers was that designing an ARM CPU takes far less effort than an equivalent performing x86 one. So the ISA does still matter despite some claiming it no longer does.
    Reply
  • smartypnt4 - Wednesday, September 18, 2013 - link

    My point wasn't that Apple can't compete; far from it. If anything, the A7 shows they can compete for the most part. However, what you suggest is that Apple could theoretically match Intel's performance on a process a full node larger, at half the power.

    I have no illusions that Intel is infallible. Stuff like Larrabee and the underwhelming GPU in Bay Trail prove that they aren't. I just seriously doubt that Apple could beat Intel at its own game - specifically, CPU performance, an area Intel has dominated for years. It's possible, but I find it relatively unlikely, especially this early in Apple's lifetime as a chip designer.

    On a different note, after looking at the Geekbench results more, I feel like it's improperly weighted. The massive performance improvement in AES and SHA encryption may be skewing the overall result... I need to dig into Geekbench more before coming to an actual conclusion. I'm also still not convinced that comparing cross-platform results is actually valid. I'd like to believe it is, but I've always had reservations about it.
    Reply
  • Wilco1 - Thursday, September 19, 2013 - link

    The Geekbench results are indeed skewed by AES encryption. The author claimed AES is the only benchmark that uses hardware acceleration when available. There has been debate about fixing the weighting or placing hardware-accelerated benchmarks in a separate category to avoid skewing the results. So I'm hoping a future version will fix this.

    As for cross-platform benchmarking, Geekbench currently uses the default platform compiler (LLVM on iOS, GCC on Android, VC++ on Windows). So there will be compiler differences that skew results slightly. However this is also what you'd get if you built the same application for iOS and Android.
    Reply
  • smartypnt4 - Thursday, September 19, 2013 - link

    A lot of the other stuff in Geekbench seems to be fairly representative, though. Except a few of the FP ones like the blur and sharpen tests...

    It surely can't be hard to have Geekbench omit those results. I think if they did, you'd see that the A7 is roughly 50-60% faster than the A6 instead of 100% faster, but I'm not sure. I'd have to go and do work to figure that out. Which is annoying :-)
    Reply
  • name99 - Wednesday, September 18, 2013 - link

    I'd agree with the tweaks you suggest: (improved memory controller and prefetcher, doubling of L2, larger branch predictor tables).

    There is also scope for a wider CPU. Obviously the most simple-minded widening of a CPU substantially increases power, but there are ways to limit the extra power without compromising performance too much, if you are willing to spend the transistors. I think Apple is not just willing to spend the transistors, but will have them available to spend once they ditch 32-bit compatibility. At that point they can add a fourth decoder, use POWER style blocking of instructions to reduce retirement costs, and add whatever extra pipes make sense.
    The most useful improvement (in my experience) would be to up the L1 from handling one load + one store per cycle to two loads + one store per cycle, but I don't know what the power cost of that is --- it may be too high.

    On the topic of minor tweaks, do we know what page size iOS uses? If they go from 4K to 16K and/or add support for large pages, they could get a 10% or so speed boost just from better TLB coverage.
    (And what's Android's story on this front? Do they stick with standard 4K pages, or do they utilize 16 or 64K pages and/or large pages?)
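The TLB-coverage point above is easy to make concrete. The 128-entry TLB size below is a placeholder assumption for illustration, not a known spec of the A7 or any particular SoC:

```python
# Rough illustration of how page size multiplies TLB reach.
# entries = 128 is a hypothetical DTLB size, not an A7 spec.

def tlb_coverage_kib(entries, page_kib):
    """Memory reachable through the TLB without a miss, in KiB."""
    return entries * page_kib

entries = 128  # hypothetical DTLB entry count
print(tlb_coverage_kib(entries, 4))   # 512 KiB covered with 4K pages
print(tlb_coverage_kib(entries, 16))  # 2048 KiB with 16K pages: 4x the reach
```

Quadrupling the page size quadruples coverage for the same TLB hardware, which is where the hoped-for speedup from fewer TLB misses would come from.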
    Reply
  • extide - Wednesday, September 18, 2013 - link

    Those are some pretty generous numbers you pulled out of your hat there. It's not as easy as just doing this and that and bam, you have something to compete with Intel's Core series stuff. No. I mean, yeah, Apple has done a great job here, and I wish someone else was making CPUs like this for Android phones, but oh well. Reply
  • name99 - Wednesday, September 18, 2013 - link

    "Now, I will agree that this does prove that if Apple really wanted to, they could build something to compete with Haswell in terms of raw throughput."

    I agree with your point, but I think we should consider what an astonishing statement this is.
    Two years ago Apple wasn't selling its own CPU. They burst onto the scene and with their SECOND device they're at an IPC and a performance/watt that equals Intel! Equals THE competitor in this space, the guys who are using the best process on earth.

    If you don't consider that astonishing, you don't understand what has happened here.

    (And once again I'd make my pitch that THIS shows what Intel's fatal flaw is. The problem with x86 is not that it adds area to a design, or that it slows it down --- though it does both. The problem is that it makes design so damn complex that you're constantly lagging; and you're terrified of making large changes because you might screw up.
    Apple, saddled with only the much smaller ARM overhead, has been vastly more nimble than Intel.
    And it's only going to get worse if, as I expect, Apple ditches 32-bit ARM as soon as they can, in two years or so, giving them an even easier design target...)

    What's next for Apple?
    At the circuit level, I expect them to work hard to make their CPU as good at turboing as Intel. (Anand talked about this.)
    At the ISA level, I expect their next major target to be some form of hardware transactional memory --- it just makes life so much easier, and, even though they're at two cores today, they know as well as anyone that the future is more cores. You don't have to do TM the way Intel has done it; the solution IBM used for POWER8 is probably a better fit for ARM. And of course if Apple do this (using their own extensions, because as far as I know ARM doesn't yet even have a TM spec) it's just one more way in which they differentiate their world from the commodity ARM world.
    Reply
  • smartypnt4 - Wednesday, September 18, 2013 - link

    @extide: agreed.

    @name99: It is very astonishing indeed. Then again, a high profile company like Apple has no problem attracting some of the best talent via compensation and prestige.

    They've still got quite a long way to go to match Haswell, in any case. But the throughput is technically there to rival Intel if they wanted to. I would hope that Haswell contains a much more advanced branch predictor and prefetcher than what Apple has, but you never know. My computer architecture professor always said that everything in computer architecture has already been discovered; the question now is when it becomes advantageous to spend the transistors to implement the most complicated designs.

    The next year is going to be very interesting, indeed.
    Reply
  • Bob Todd - Wednesday, September 18, 2013 - link

    How many crows did you stuff down after claiming BT would be slower than A15 and even A12? Remember posting this about integer performance?

    "Silverthorne < A7 < A9 < A9R4 < Silvermont < A12 < Bobcat < A15 < Jaguar"

    Apple's A7 looks great, but you've made so many utterly ridiculous Intel performance bashing posts that it's pretty much impossible to take anything you say seriously.
    Reply
  • Wilco1 - Wednesday, September 18, 2013 - link

    BT has indeed far lower IPC than A15 just like I posted - pretty much all benchmark results confirm that. On Geekbench 3 A15 is 23-25% faster clock for clock on integer and FP.

    The jury is still out on A12 vs BT as we've seen no performance results for A12 so far. So claiming I was wrong is not only premature but also incorrect as the fact is that Bay Trail is slower.
    Reply
  • Wilco1 - Wednesday, September 18, 2013 - link

    Also, the new version with the A7 and A57 now looks like this:

    Silverthorne < A7 < A9 < A9R4 < Silvermont < A12 < Bobcat < A15 < Jaguar < A57 < Apple A7
    Reply
  • Bob Todd - Wednesday, September 18, 2013 - link

    Cherry picking a single benchmark which is notoriously inaccurate at comparisons across platforms/architectures doesn't make you "right", it just makes you look like more of a troll. Bay Trail has better integer performance than Jaguar (at near identical base clocks), so by your own ranking above it *has* to be faster than A12 and A15.

    You show up in every ARM article spouting the same drivel over and over again, yet you were mysteriously absent in the Bay Trail performance preview. Here's the link if you want to try to find a way to spin more FUD.

    http://anandtech.com/show/7314/intel-baytrail-prev...

    Apple's A7 looks great, and IT is still the powerhouse of mobile graphics. The A7 version in the iPad should be a beast. None of that makes most of your comments any less loony.
    Reply
  • Wilco1 - Wednesday, September 18, 2013 - link

    If all you can do is name calling then you clearly haven't got a clue or any evidence to prove your point. Either come up with real evidence or leave the debate to the experts. Do you even understand what IPC means?

    For example in your link a low clocked Jaguar is keeping up with a much higher clocked Bay Trail (yes it boosts to 2.4GHz during the benchmark run), so the obvious conclusion is that Jaguar has far higher IPC than Bay Trail. For example Jaguar has 28% higher IPC than BT in the 7-zip test. Just like I said.

    Now show me a single benchmark where BT gets better IPC than Jaguar. Put up or shut up.
    Reply
  • zeo - Wednesday, September 18, 2013 - link

    The point that BT beats Jaguar, especially at performance per watt, clearly proved the point given!

    And insisting as you are on your original assessment is characteristic of acting like a Troll... So you're not going to convince anyone by simply insisting on being right... especially when we can point to Anandtech pointing out multiple benchmarks in this article that showed Kabini performing lower than both BT and the A7!

    So either learn to read what these reviews actually post or accept getting labeled a Troll... either way, you're not winning this argument!
    Reply
  • Wilco1 - Wednesday, September 18, 2013 - link

    No, Bob's claim was that Bay Trail was faster clock for clock than Jaguar, when the link he gave to prove it clearly showed that is false. BT may well beat Jaguar on perf/watt, but that's not at all what we were discussing.

    So next time try to understand what people are discussing before jumping in and calling people a Troll. And yes I stand by my characterization of various microarchitectures, precisely because it's based on actual benchmark results.
    Reply
  • Bob Todd - Wednesday, September 18, 2013 - link

    IPC as a comparison point made a lot of sense when we were arguing about which 130 watt desktop processor had the better architecture. It seems largely irrelevant for mobile, where we care about performance per watt. Your argument is continually that the ARM/AMD designs are 'faster' based on Geekbench. If Jaguar has 28% higher IPC than Bay Trail, do you honestly think it matters if Bay Trail is still the faster chip at 1/3 (or less) of the power requirements? If someone came up with a crazy design that needed 5x the clocks to have a 2x performance advantage over their competitor, but did so with half the power budget, they'd still be racking up design wins (assuming parity for all other aspects like price). That's a two-way street. If ARM designs a desktop/server-focused chip that needs higher clocks than Intel to reach performance parity with or be faster than Haswell, but does so with significantly less power, it's still a huge win for them. Reply
  • Wilco1 - Wednesday, September 18, 2013 - link

    IPC matters because you can compare different microarchitectures and make predictions about performance at different clock speeds. I'm sure you know many CPUs come in a confusing variety of clock speeds (and even different base/turbo frequencies for Intel parts), but the underlying microarchitecture always remains the same. You can't make claims like "Bay Trail is faster than Jaguar" when such a claim would only be valid at very specific frequencies. However, we can say that Jaguar has better IPC than BT, and that will remain true irrespective of frequency. That is the purpose of the list of microarchitectures I posted.
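The clock-for-clock comparison described above amounts to normalizing a score by frequency. The scores and clocks below are illustrative placeholders, not measured results for any real chip:

```python
# Sketch of a clock-normalized (IPC-style) comparison.
# All numbers are hypothetical, chosen only to show the normalization.

def per_ghz(score, clock_ghz):
    """Normalize a benchmark score by clock speed to compare microarchitectures."""
    return score / clock_ghz

# Hypothetical chips: A scores 1100 at 1.4 GHz, B scores 1200 at 2.4 GHz.
# B has the higher absolute score, but A does more work per clock.
a = per_ghz(1100, 1.4)  # ~785.7 points/GHz
b = per_ghz(1200, 2.4)  # 500.0 points/GHz
print(a > b)  # True
```

This is why a per-clock ranking can stay stable across product variants even when absolute scores at shipping frequencies point the other way.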

    I was originally talking about the performance of Apple A7 and Bay Trail in Geekbench. You may not like Geekbench, but it represents close to actual CPU performance (not rubbish JavaScript, tuned benchmarks, cheating - remember AnTuTu? - or unfair compiler tricks).

    Now you're right that besides absolute performance, perf/W is also important. Unfortunately there is almost no detailed info on power consumption, let alone energy to do a certain task for various CPUs. While TDP (in the rare cases it is known!) can give some indication, different feature sets, methodologies, "dial-a-TDP" and turbo features makes them hard to compare. What we can say in general is that high-frequency designs tend to be less efficient and use more power than lower frequency, higher IPC designs. In that sense I would not be surprised if the A7 also shows a very good perf/Watt. How it compares with BT is not clear until BT phones appear.
    Reply
  • Bob Todd - Wednesday, September 18, 2013 - link

    Your point about benchmarks is actually what surprises me the most nowadays. The biggest thing every in-depth review of a new ARM design brings to light is how freaking piss poor the state of mobile benchmarking is from a software standpoint. I didn't expect magic by the time we got to A9 designs, but it's a little ridiculous that we're still in a state of infancy for mobile benchmarking tools over half a decade after the market really started heating up. Reply
  • Bob Todd - Wednesday, September 18, 2013 - link

    And by "ARM design" I mean both their cores or others building to their ISA. Reply
  • Wilco1 - Thursday, September 19, 2013 - link

    Yes, mobile benchmarking is an absolute disgrace. And that's why I'm always pointing out how screwed up Anand's benchmarking is - I'm hoping he'll understand one day. How anyone can conclude anything from JS benchmarks is a total mystery to me. Anand might as well just show AnTuTu results and be done with it, that may actually be more accurate!

    Mobile benchmarks like EEMBC, CoreMark etc. are far worse than the benchmarks they try to replace (e.g. Dhrystone). And SPEC is useless as well: ignoring the fact that it is really a server benchmark, the main issue is that it ended up being more of a compiler trick contest than a fair CPU benchmark. Of course Geekbench isn't perfect either, but at the moment it's the best and fairest CPU bench: because it uses precompiled binaries, you can't use compiler tricks to pretend your CPU is faster.
    Reply
  • akdj - Thursday, September 19, 2013 - link

    SO.....what is it the 'crew' is supposed to 'do'? NOT provide ANY benchmarks? Anand and team are utilizing the benchmarks available right now. They're not building the software to bench these devices...they're reviewing them...with the tools available, currently, NOW---on the market. If you're so interested in better mobile benchmarking (still in its infancy---it's only really been 5 years since we've had multiple devices to even test), why not pursue and build your own benchmarking software? Seems like it may be a lucrative project. Sounds like you know a bit about CPU/GPU and SoC architecture---put something together. Sunspider is ubiquitous, used on any and all platforms from desktops to laptops---tablets to phones, people 'get it'. As well, GeekBench is re-inventing their benchmarking software---and the Google Octane tests are fairly new...and many of the folks using these devices ARE interested in how fast their browser populates, how quick a single core is---the speed of apps opening and launching, opening a PDF, FPS playing games, et al.
    Again---if you're not 'happy' with how Anand is reviewing gear (the best on the web IMHO), open your own site---build your own tools, and let's see how things turn out for ya!
    Give credit where credit is due....I'd much rather see the way Anand is approaching reviews in the mobile sector than a 1500 word essay without benchmarking results because current "mobile benchmarking is an absolute disgrace"
    YMMV as always
    J

    PS---Thanks for the review guys....again, GREAT Job!
    Reply
  • Bob Todd - Thursday, September 19, 2013 - link

    Umm...I think you missed my point. I love the reviews here. That doesn't change the fact that mobile benchmarking software sucks compared to what we have available on the desktop. That isn't a slam against this site or any of the reviewers, and I fully expect them to use the (relatively crappy) software tools that are available. And they've even gone above and beyond and written some tools themselves to test specific performance aspects. I'm just surprised that with mobile being the fastest growing market, nobody has really stepped up to the plate to offer a good holistic benchmarking suite to measure cpu/gpu/memory/io performance across at least iOS/Android. And no, I don't expect anyone at Anandtech to write or pay someone to write such a tool. Reply
  • akdj - Friday, September 27, 2013 - link

    I completely agree---that said, we're really only 5 years 'in'. The original iPhone in '07, a true Android follow-up in late '07/early '08---those were crap. Not really necessary to 'bench' them. We all kinda knew the performance we could expect, same for the next generation or two. In the past three years---Moore's law has swung into high gear, and these are now---literally---replacement computers (along with tablets) for the majority of the population. They're not using their home desktop anymore for email, Facebook, surfing and recipes. Even gaming---unless they're @ 'work' and in front of their 'work Dell' from 2006, they're on their smartphones...for literally everything! In these past three years---and it seems Anand, Brian and crew are quite 'up front' about the lack of mobile testing applications and software---we're in the infancy here: 36 real months in with software, hardware and OSes worth 'testing, benchmarking, and measuring'. Just my opinion....and I suppose we're saying the same thing.
    That said---even Google's new Octane test was and is being used lately---GeekBench has revised their software; it's coming, is my point. But just looking at the differences between the generations of iPhones makes it blatantly obvious how far we've come in 4/5 short years. In 2008 and '09---these were still phones with easier ways to text and access the internet, some cool apps and ways to take, manipulate and share pics and videos. Today---they do literally everything an actual computer does, and I'd bet---in a lot of cases, these phones are as or more powerful, faster and more accessible than those ancient beige boxes from the mid-2000s a lot of folks have in their home office;)
    Reply
  • Duck <(' ) - Thursday, October 03, 2013 - link

    They are posting false benchmark scores. The same phones score differently in YouTube vids.
    The iPhone 5 Browsermark 2 score is around 2300:
    check here https://www.youtube.com/watch?v=iATFnXociC4
    The SGS4 scores 2745 here: https://www.youtube.com/watch?v=PdNE4NoFq8U
    Reply
  • ddriver - Wednesday, September 18, 2013 - link

    Actually, there are quite a lot of discrepancies in this review.

    For starters, the "CPU performance" page only contains JS benchmarks and not a single native application. And iOS and Android use entirely different JS engines, so this is literally a case of comparing apples to oranges.

    The native benchmarks don't compare the new Apple chip to other "old 32-bit v7 chips" - they only compare the new Apple chip to the old ones, and the new chip in 32-bit vs 64-bit mode. Oddly enough, the Geekbench results at Engadget show Tegra 4 actually being faster.

    Then there is the inclusion of hardware implementations in charts that are supposed to show the benefits of 64-bit execution mode, when in reality the encryption workloads are handled in fundamentally different ways in the two modes: in software in 32-bit mode and in hardware in 64-bit mode. This turns the integer performance chart from a mixed bag into one falsely attributing performance gains to 64-bit execution rather than to the hardware implementations. The FP chart also shows no miracles: wider SIMD units result in almost 2x the score in a few tests, and nothing much in the rest.

    All in all, I'd say this is a very cleverly compiled review, cunningly deceitful to show the new apple chip in a much better light than it is in reality. No surprises, considering this is AT, it would be more unexpected to see an unbiased review.

    I guess we will have to wait a bit more until mass availability for unbiased reviews, considering all those "featured" reviews usually come with careful guidelines by the manufacturer that need to be followed to create an unrealistically good presentation of the product. That is the price you have to pay to get the new goodies first - play by the rules of a greedy and exploitative industry. Corporate "honesty" :)

    I don't say the new chip is bad, I just say it is deliberately presented as unrealistically good. Krait has expanded its SIMD units to 128 bit as well, so we should see similar performance even without the move to a 64-bit ecosystem. Last but not least, 64-bit code bloats the memory footprint of applications because pointers are twice as big, and while those limited-footprint synthetic benches play well with the single gigabyte of RAM on this device, I expect an actual performance-demanding real-world application will be bottlenecked by the RAM capacity. All in all, the decision to go 64-bit is mostly a PR stunt. Surely 64-bit is the future, but for this product, with its limited RAM capacity, it doesn't really make all that much sense - though it will no doubt keep up the spirit of Apple fanboys and make up for declining sales while they bring out the iPhone 6, which will close all those deliberately left gaping holes in the 5s.
    Reply
  • Slaanesh - Wednesday, September 18, 2013 - link

    Interesting comment. I'd like to know what Anand has to say about this. Reply
  • ddriver - Wednesday, September 18, 2013 - link

    I am betting my comment will most likely vanish mysteriously. I'd be happy to see my concerns addressed though, but I admit I am putting Anand in a very inconvenient position. Reply
  • Mondozai - Wednesday, September 18, 2013 - link

    It's all a conspiracy. In fact your comment has already disappeared, but in its place is now a hologram effect that makes it impossible to tell it from the blank space. So why put in the hologram and not just delete it? Because Anand is playing mind games with us.
    And who said I typed this comment? It could have been someone else, someone doing Anand's bidding.

    I admit I am putting his scheme of deception in a very difficult position right now.

    /s
    Reply
  • ddriver - Wednesday, September 18, 2013 - link

    A few days ago I posted a comment criticizing AT moderators for being idle and tolerant of the "I make $$$ sitting in front of my mac" spam; a few minutes later my comment was removed while the spam remained, which led me to expect a similar fate for this comment. Good thing I was wrong ;) Reply
  • WardenOfBats - Wednesday, September 18, 2013 - link

    I'd honestly like to see your misleading comment removed as well. You seem to think that anyone cares what the 32bit performance of the 5S is, when everyone knows damn well that app developers are going to be clamouring to switch to the 64bit tech (that Android doesn't even have or support) to get these power increases. Other than that, the rest of your comment is just nonsense. The whole point is that the iPhone 5S is faster, and the fact that they use different JS engines is a part of that. Apple just knows how to make software optimizations and hardware that runs them faster, and you can see how they just blow the competition away. Reply
  • CyberAngel - Thursday, September 19, 2013 - link

    Misleading? Yes! In favor of Apple!
    You need to double the memory lines, too, and the caches, and...oh boy!
    The next Apple CPU will be "corrected" and THEN we'll see...hopefully RAM is 8GB...
    Reply
  • Ryan Smith - Thursday, September 19, 2013 - link

    As we often have to remind people, we don't delete comments unless they're spam. However when we do so, any child comments become orphaned and lose their place in the hierarchy, becoming posts at the end of the thread. Your comment isn't going anywhere, nor have any of your previous comments. Reply
  • CyberAngel - Thursday, September 19, 2013 - link

    Hurray! Reply
  • name99 - Wednesday, September 18, 2013 - link

    I imagine he would say "I heard exactly the same shit about the A6 and how it couldn't possibly be as good as I claimed. Come back when you have NUMBERS to back up your complaints." Reply
  • monaarts - Wednesday, September 18, 2013 - link

    You are mistaking a more natural transition to a 64-bit mobile world for a "PR stunt." Yes, Android will probably make a huge jump and switch to 64-bit and include 8GB of memory, or something crazy like that, but that will only add to the fragmentation it is already burdened with. Apple, however, is trying to avoid that by building steps that lead to where they eventually want the iPhone to go. Reply
  • ddriver - Wednesday, September 18, 2013 - link

    The PC market is fragmented as well, but it still goes, doesn't it? Surely, as a bigger ecosystem, android will be slower to adopt changes. But HPC doesn't really make sense in a phone; that is why most ARM chip vendors are focusing on server v8 chips and infrastructure, which will be more lucrative than consumer electronics. v8 is the future, no doubt about it, but apple has no other winning hand besides offering v8 a little too early, knowing it can make up for the cost of the premature transition with profit margins other manufacturers cannot dream of asking for the same hardware. Surely, other brands have their fanboys too, but nowhere near as fanatically devoted and eager to "just take my money". Reply
  • tbrizzlevb - Wednesday, September 18, 2013 - link

    I've never had any problem with this "fragmentation" that you cut and pasted from somewhere. How exactly has that kept you from buying Android? Do you normally keep your phone for 5 years or more? I'd be willing to bet you aren't using an iPhone 3 right now. If you upgrade your phone every few years anyways, what do you care if the first generation Droid doesn't run the latest OS update? Reply
  • AaronJ68 - Wednesday, September 18, 2013 - link

    The fact that there was never an iPhone 3 might have something to do with that. Reply
  • CyberAngel - Thursday, September 19, 2013 - link

    As a programmer I say there is a HUGE fragmentation problem with all the Androids.
    I code for Jelly Bean only, and I do have an "old" 4.0 single-core device for testing the lag.
    Reply
  • Focher - Wednesday, September 18, 2013 - link

    You lost me when you used the phrase "apple fanboys" and "declining sales". You don't seem to understand the difference between market share versus unit sales. Reply
  • ddriver - Wednesday, September 18, 2013 - link

    So you must have taken "apple fanboys" personally, and understandably "declining sales" said of apple conflicts you. Like all other vendors, apple's sales drop, thus the periodic refreshes, which do need their selling points; this time around it is v8. In the past, apple resorted to similar strategies, like making exclusive deals and purchasing the entire initial batches of new generation parts, again making up for the cost of this "innovation" with their profit margins.

    I don't recall any other brand which pushed kids to selling their organs, do you? If that is not fanatical fanboyism, I don't know what is...
    Reply
  • akdj - Thursday, September 19, 2013 - link

    Weird---seems to me EVERY time an iPhone is released, it sells MORE than the previous version FASTER! In fact, it's never been surpassed by another electronic item in history...and its sales continue to climb. News Flash! Android (High end; S4/Note/HTC One/XPeria) also sell @ a premium and almost @ the exact price of Apple's handsets. The rest of your post is nonsense. It's not just Anand's site and review praising the performance of the A7---it's ubiquitous. Is Apple paying Ars? TechCrunch? MacRumors? WSJ? Engadget? CNet?
    Dude---this is one HELL of an SoC. It's so much MORE than a 64bit chess match. A company licensing and building from the ground up a CPU/GPU/IPU that matches their OS, and providing the tools to developers (For Free!) in XCode so the transition is seamless! The speed of this chip is awesome. Setting themselves up early is smart---the iPad release is imminent and has competition from both sides...Microsoft and Android. With a significantly larger 'body' and area to allow for more RAM...and future releases of other ARM based products, Apple is making in-roads within the mobile sector that ONLY bozos who cherish other brands, OEMs, or otherwise have some weird bone to pick with Apple fail to recognize. Mind blowing. I'm brand agnostic---I use OSx, Windows and even own a couple of Android devices....but to me, being as dismissive as you are about the A7's build, and about putting it in a product the size of a pack of smokes, is about as silly and should I say 'ignorant' a stance as I've seen in a long LONG time.
    This move by Apple is HUGE. Moore's law now applied to mobile. Still a dual core. Still 1GB of RAM (albeit DDR3 vs DDR2)---yet doubling, tripling, sometimes quadrupling the performance of the other 'off the shelf' SoCs on the market that other OEMs are using. It's no secret---Apple has been hiring and head hunting chip designers from Intel and AMD for some time now. These guys and gals are some of the brightest minds on earth....but you've got it all figured out, and somehow Anand's been blinded by the conspiracy---as has EVERY other reviewer on the 'net.
    Un-Believable.
    J
    Reply
  • ddriver - Wednesday, September 18, 2013 - link

    I mean, only a true apple fanboy is capable of disregarding all that technical argumentation because of the mention of the term "apple fanboys". A drowning man will hold onto a straw :) Reply
  • akdj - Thursday, September 19, 2013 - link

    You consider your comment 'technical argumentation'? It's not....it's your 'opinion'. I think you can rest assured Anand's site is geared much more to those of us interested in technology and less interested in being a 'fanboy'. In fact....so far reading through the comments, you're the first to bring that silly cliché up, "Fan Boy".
    A drowning man will hold on to anything to help save himself :)
    Reply
  • Wilco1 - Wednesday, September 18, 2013 - link

    Good comment - I'm equally unimpressed by the comparison of a real phone with a Bay Trail tablet development board which has significantly higher TDP. And then calling it a win for Bay Trail based on a few rubbish JS benchmarks is even more ridiculous. These are not real CPU benchmarks but all about software optimization and tuning for the benchmark.

    Single-threaded Geekbench 3 results show the A7 outperforming the 2.4GHz Bay Trail by 45%. That's despite the A7 running at only 54% of Bay Trail's frequency! In short, per clock the A7 does about 2.7 times the work of BT, putting its IPC on par with or better than Haswell's...
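
    To spell out the per-clock arithmetic behind that claim (a sketch using rounded figures: ~1.3GHz for the A7, 2.4GHz for Bay Trail, and the ~45% single-thread lead):

```python
# Rough per-clock comparison from the single-threaded Geekbench 3 numbers.
# All figures are approximate round numbers, not exact scores.
a7_freq_ghz = 1.3      # iPhone 5s A7 clock
bt_freq_ghz = 2.4      # Bay Trail development board clock
speedup = 1.45         # A7 score / Bay Trail score (~45% faster)

freq_ratio = a7_freq_ghz / bt_freq_ghz   # ~0.54: A7 runs at ~54% of BT's clock
per_clock = speedup / freq_ratio         # relative work done per cycle

print(round(freq_ratio, 2))  # 0.54
print(round(per_clock, 1))   # 2.7
```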
    Reply
  • tech4real - Wednesday, September 18, 2013 - link

    Not trying to dismiss the A7's CPU core - it's an amazing piece of silicon and a significant step up from the A6 - but is there a possibility that Geekbench 3 is unfit to gauge average cross-ISA, cross-OS CPU performance? To me, the likelihood of this is pretty high. Reply
  • Wilco1 - Wednesday, September 18, 2013 - link

    Comparing different ISAs does indeed introduce inaccuracies due to compilers not being equal. Cross OS is less problematic as long as the benchmark doesn't use the OS a lot.

    It's a good idea to keep this in mind, but unfortunately there is little one can do about it. And other CPU benchmarks are not any better: if you used SPEC, performance differences across compilers would be far larger than with Geekbench (even on the same CPU the difference between 2 compilers can be 50%)...
    Reply
  • Dooderoo - Wednesday, September 18, 2013 - link

    "The AES and SHA1 gains are a direct result of the new cryptographic instructions that are a part of ARMv8. The AES test in particular shows nearly an order of magnitude performance improvement".

    Your comment: "in reality the encryption workloads are handled in a fundamentally different way in the two modes [...] a mixed bad into one falsely advertising performance gains attributed to 64bit execution and not to the hardware implementations as it should"

    Maybe actually read the article?

    "The FP chart also shows no miracles, wider SIMD units result in almost 2x the score in few tests, nothing much in the rest"
    Exclude those tests and you're still looking at a 30% improvement. A 30% increase in performance from a recompile counts as "nothing much" in what world?
    Reply
  • ddriver - Wednesday, September 18, 2013 - link

    My point was encryption results should not have been included in the chart and presented as "benefits of 64bit execution mode" because they aren't.

    Also, those 30% can easily be attributed to other incremental upgrades in the chip, like the faster memory subsystem, better prefetchers and whatnot - not necessarily 64bit execution. I've been using HPC software for years, and despite the fact that x64 came with double the registers, I did not experience any significant speedup in the workloads I use daily - 3D rendering, audio and video processing, and multiphysics simulations. The sole benefit of 64bit I've seen professionally is the extra ram I can put into the machine, making tasks which require a lot of ram WAY FASTER, sometimes tens or even hundreds of times faster because of the avoided swapping.

    Furthermore, I will no longer address technically unsubstantiated comments, in order to avoid spamming all over the comment space.
    Reply
  • Dooderoo - Wednesday, September 18, 2013 - link

    "Furthermore, I will no longer address technically unsubstantiated comments, in order to avoid spamming all over the comment space."
    Man, you give up too easily.

    Encryption results are exactly that: "benefits of 64bit execution mode". Why? 32-bit A32 doesn't have the instructions, 64-bit A64 does. Clear and obvious benefit.

    "30% can easily be attributed to other incremental upgrades to the chip". Wouldn't the 32-bit version benefit from those as well?

    I'm beginning to think you don't understand that those results are both from the A7 SOC, once run with A32 and once with A64.
    Reply
  • ddriver - Wednesday, September 18, 2013 - link

    ""30% can easily be attributed to other incremental upgrades to the chip". Wouldn't the 32-bit version benefit from those as well?"

    This may be correct, unless I am overlooking execution mode details of which I am not aware - and I expect neither are you, unless you are an engineer who has worked on the A7 chip. I don't think the data needed to comment on it in detail is available yet.

    But you are not correct about the encryption results, because those are a matter of extra hardware implementation. It is like comparing software rendering to hardware rendering: a CPU with a hardware implementation of graphics will be immensely faster at a graphics workload than one that runs graphics in software, even if the two are otherwise the same speed. If anything, the architecture upgrades of the A7 chip can at best result in a 2x peak theoretical performance improvement, while the AES test shows an 8x+ improvement. This is because the performance boost is not due to 64 bit mode execution, but due to the extra hardware implementation that is exclusively available in that mode.
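
    To make the software-vs-hardware point concrete by analogy (a sketch, not the actual AES hardware): the same algorithm implemented naively in pure Python versus through hashlib's optimized native code differs in speed by orders of magnitude, which is exactly why implementation gains should not be credited to the execution mode:

```python
import hashlib
import struct
import timeit

def rotl(x, n):
    """Rotate a 32-bit value left by n bits."""
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def sha1_pure(data):
    """Textbook pure-Python SHA-1 - the same algorithm hashlib runs in native code."""
    h = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0]
    # Pad to a multiple of 64 bytes: 0x80, zeros, then the 64-bit message bit length.
    msg = data + b'\x80' + b'\x00' * ((55 - len(data)) % 64)
    msg += (len(data) * 8).to_bytes(8, 'big')
    for i in range(0, len(msg), 64):
        w = list(struct.unpack('>16I', msg[i:i + 64]))
        for t in range(16, 80):
            w.append(rotl(w[t - 3] ^ w[t - 8] ^ w[t - 14] ^ w[t - 16], 1))
        a, b, c, d, e = h
        for t in range(80):
            if t < 20:
                f, k = (b & c) | (~b & d), 0x5A827999
            elif t < 40:
                f, k = b ^ c ^ d, 0x6ED9EBA1
            elif t < 60:
                f, k = (b & c) | (b & d) | (c & d), 0x8F1BBCDC
            else:
                f, k = b ^ c ^ d, 0xCA62C1D6
            a, b, c, d, e = (rotl(a, 5) + f + e + k + w[t]) & 0xFFFFFFFF, a, rotl(b, 30), c, d
        h = [(x + y) & 0xFFFFFFFF for x, y in zip(h, [a, b, c, d, e])]
    return ''.join('%08x' % x for x in h)

payload = b'x' * 16384
assert sha1_pure(payload) == hashlib.sha1(payload).hexdigest()  # same algorithm, same answer

slow = timeit.timeit(lambda: sha1_pure(payload), number=3)
fast = timeit.timeit(lambda: hashlib.sha1(payload).hexdigest(), number=3)
print('pure Python is roughly %dx slower' % (slow / fast))
```

    The exact ratio is not the point; the point is that it is an implementation difference, not an algorithm or register-width difference.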
    Reply
  • Dooderoo - Wednesday, September 18, 2013 - link

    "I don't think that data is available yet to comment on it in detail."
    Yet you're ok with calling the article "cunningly deceitful"? Weird.
    Reply
  • ddriver - Wednesday, September 18, 2013 - link

    My basis for this conclusion is how the article is structured - the careful picking of benchmarks and the selective comparisons. This is clearly visible and has nothing to do with the actual chip specifications or with execution mode specific details. So no, I don't have a problem with facts, unlike you.

    Furthermore, that 30% number you were focused on is hardly impressive, nor proportional to the claims this article is making. In a workload that takes an hour, 30% is a noticeable improvement, but for typical phone applications this is not the case.
    Reply
  • Dooderoo - Wednesday, September 18, 2013 - link

    The structure of the article and the benchmarks are mostly the same as those used in most reviews here, excluding some Android specific benchmarks. Where exactly do you see "carefully picking of benchmarks and selective comparisons"? Put differently: what benchmarks should they include to convince you there is no "cunning deceit" at work?

    What claims in the article are not proportional with the 30% (actually more) performance gain?

    I won't even comment on the "not a noticeable improvement" bit.
    Reply
  • andrewaggb - Wednesday, September 18, 2013 - link

    My issue with all the benchmarks is that they are mostly synthetic. The most meaningful benchmarks are the applications you plan to use and the usage patterns you are targeting. Synthetics are fascinating, but I think it's generally a mistake to buy anything based on them. Reply
  • notddriver - Thursday, September 19, 2013 - link

    Um, so if a 30% improvement is hardly impressive and irrelevant to phones, then isn't the entire concept of reviewing phones on the basis of hardware performance also irrelevant? Which would make your complaints about the biased-yet-insignificant-review as vital as a debate over whether Harry Potter or Spiderman would be better at defending Metropolis.

    Incidentally, my iPhone 5 is powerful enough that I never notice any issues—as I'm sure the last generation of Android phones would be. But if you're going to go to town over a dozen or more comments about a topic, at least pretend that it matters a tiny bit. Just good form.
    Reply
  • oRdchaos - Wednesday, September 18, 2013 - link

    I've seen people all over the web get very worked up about people's phrasing with regard to 64-bit. Would you prefer the title of the section were "Performance gains from a 64-bit architecture and the new ARMv8 instruction set"? People keep arguing that 64-bit in a vacuum doesn't give much performance gain. But there is no vacuum.

    I think the article is very clear to point out where gains are from additional instructions, versus a doubling of the register bit width, versus improved memory subsystem/cache. I'm sure when they get chances to write more of their own tests, they'll be able to pinpoint things further.
    Reply
  • sfaerew - Wednesday, September 18, 2013 - link

    The Engadget number is multi-threaded Geekbench performance: the Tegra 4's 4 cores vs the A7's 2 cores. Reply
  • Spoony - Wednesday, September 18, 2013 - link

    - You are correct, there are no native cross-platform benches used. Which ones do you suggest Anandtech use? We all know Geekbench is essentially meaningless across platforms.

    - If you are talking about this engadget review: http://www.engadget.com/2013/09/17/iphone-5s-revie... It appears that Nvidia SHIELD (Tegra 4) led in only one benchmark out of six. This makes your statement incorrect. LG G2 is more competitive. Need we repeat how inaccurate Geekbench is cross-platform. It is as apples-to-oranges as the JS tests.

    - I believe what Anandtech was attempting to show with the encryption was the difference ARMv8 ISA makes. In fact the title of that somewhat sensational chart is "AArch64 vs. AArch32 Performance Comparison". So while you are right, the encryption tests are handled in a fundamentally different way, that way is part of the ISA and is an advantage of AArch64, and thus is valid in the chart.

    - It will be curious to see whether Qualcomm can deliver A7 like performance using ARMv7 with extended features. My position is no, which is the whole point of that entire page of the review. ARMv8 is actually enabling some additional performance due to ISA efficiency and more features.

    - I think the claim of noticeable memory footprint bloat from a 64-bit executable is completely ridiculous. But to see if I was right, I did some testing. It's getting a bit hard to find fat binaries to take apart these days; most things are x86_64 only. But I found a few. I computed the size increase for three separate applications, took the average, and it looks like about a 9% increase in executable size. Considering that executables themselves are a tiny part of any application's assets, I think it is completely insubstantial. If you calculate the increase in executable size versus the size of the whole application package, it averages to a less than 1% increase.

    I too am a bit sad that Apple didn't increase the RAM, and also equally sad that connectivity was left out this rev. I continue to be sad that there is not a more serious storage controller inside the phone. You make some valid points, but I think you also make some erroneous ones. The question with phone SoCs is: Is this a well balanced platform along the axes of performance (GPU and CPU), power consumption (thus heat), and features. I believe that the A7 is well calibrated. Obviously Qualcomm is also doing great work, and perhaps their SoCs are equally as well calibrated.
    Reply
  • ddriver - Wednesday, September 18, 2013 - link

    - This is entirely his decision, considering writing those reviews is his job, not mine. He can either use actual native benchmarks which reflect the performance of the actual hardware, or call it JS VM performance instead of CPU performance, because different JS implementations across platforms are entirely meaningless.

    - there is only one geekbench test at engadget. That is why I said "geekbench" - I did not imply it was faster in all tests in the engadget review, and I don't know why you insinuate I did. IIRC snapdragon 800 is actually a little slower in the CPU department than tegra 4, and only faster in the graphics department.

    - the boost in encryption is completely disproportionate to the other improvements and is due to hardware implementations, not 64bit execution mode. So, if anything, it should be a graph or chart of its own, instead of being used to bulk up the chart that is supposed to be indicative of integer performance improvements in 64bit mode.

    - maybe v7 chips with 128 bit SIMD units will not deliver quite the performance of the A7, because there is more to the subject than the width of the registers (the number of registers doesn't really matter that much), like the supported instructions. At any rate, v7 chips are still quad core, which means 4x128bit SIMD units compared to the 2 on the A7, albeit the A7's support a few extra instructions. Until a native benchmark that guarantees saturation of the SIMD units pops up, it would be foolish to make a concrete statement on the subject. But boosting v7 SIMD units to 128bit width will at least make them competitive in number crunching scenarios, which use SIMD 99% of the time.

    - this is very relative: you can store the same data in three different containers and get a completely different footprint. A vector will only use a single pointer, since it is contiguous in memory; a forward list will use a pointer for every data element; a linked list will use two. Depending on the requirements, you may need fast arbitrary inserts and deletions, which will require a linked list, and in the case of a single byte datatype, a 32bit list element will be 12 bytes because of padding and alignment, while in 64bit mode the size will grow to 24 bytes, which is exactly double. Granted, this is the other extreme of the "less than 1%" you came up with; truth is, results will vary in between depending on the workload.
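
    For what it's worth, the node-size arithmetic above is easy to check. Here's a sketch that simulates both pointer widths with fixed-size integers standing in for the pointers (actual sizes depend on the compiler and ABI, but these are the typical alignment rules):

```python
import ctypes

class Node32(ctypes.Structure):
    # Doubly-linked list node as a 32-bit compiler would lay it out:
    # two 4-byte "pointers" plus one data byte, padded to 4-byte alignment.
    _fields_ = [('next', ctypes.c_uint32),
                ('prev', ctypes.c_uint32),
                ('data', ctypes.c_uint8)]

class Node64(ctypes.Structure):
    # The same node with 8-byte "pointers", padded to 8-byte alignment.
    _fields_ = [('next', ctypes.c_uint64),
                ('prev', ctypes.c_uint64),
                ('data', ctypes.c_uint8)]

print(ctypes.sizeof(Node32))  # 12 bytes
print(ctypes.sizeof(Node64))  # 24 bytes - exactly double
```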

    As I said in the first post, the wise thing would be to reserve judgement until mass availability, mostly because I know corporate practices involving exclusive reviews prior to availability, which are a pronounced determining factor to the initial rate of sales. In short, apple is in the position to be greatly rewarded for imposing some cheating requirements on early exclusive reviewers. And at least in this aspect I think everyone will disagree, apple is not the kind of company to let such an opportunity go to waste.
    Reply
  • ddriver - Wednesday, September 18, 2013 - link

    *no one will disagree Reply
  • Dug - Wednesday, September 18, 2013 - link

    I will.
    "As I said in the first post, the wise thing would be to reserve judgement until mass availability, mostly because I know corporate practices involving exclusive reviews prior to availability, which are a pronounced determining factor to the initial rate of sales. In short, apple is in the position to be greatly rewarded for imposing some cheating requirements on early exclusive reviewers. And at least in this aspect I think everyone will disagree, apple is not the kind of company to let such an opportunity go to waste."

    Prove it and stop making assumptions.
    Reply
  • BrooksT - Wednesday, September 18, 2013 - link

    Nobody will disagree because you've completely destroyed your credibility by insulting the credibility, integrity, and competence of the reviewer, the site, and Apple because the evidence doesn't conform to your speculations and bias. You are not to be taken seriously, and at this point I think everyone sees that.

    Post evidence of this conspiracy or STFU.
    Reply
  • ddriver - Thursday, September 19, 2013 - link

    Here's a whiff of reality for you - my credibility is not and has not been on the line on this one. You don't know who I am, you don't know my credentials. This is not the case for Anand: even if I am right, he is not in the position to admit to compiling the review in a manner that creates an unrealistically good presentation of a product, because unlike for me, that would be a huge credibility calamity for him. If anything, his responses are very "political", carefully dancing around the pivot points of my concerns. While his response did partially address a few of my concerns, my key points remain valid - the article still does not compare the A7 with ARMv7 head to head in the sole native CPU benchmark present in the article, and "CPU performance" was not renamed to JS performance or moved to browser performance or something like that. See, just because he didn't agree with my points and admit to being biased does not mean I am wrong, considering he is not in the position to do that. I didn't really expect anything more or less than the same "careful dancing" as in the article itself; my main motivation was to show him that not all AT readers are incapable of reading between the lines, for the sake of future articles - I did not expect that he would make any revision to the article at hand. Honesty is for those who have nothing to lose, and while his credibility is on the line, mine isn't. Make the conclusions, if you can ;) Reply
  • CyberAngel - Thursday, September 19, 2013 - link

    Don't worry! I believe you...conditionally!
    I put it this way: I greatly doubt that the tests would reveal any points that are less than favorable to Apple. ANY company would do the same: promote the best parts and highlight the strengths of the product.
    Reply
  • akdj - Thursday, September 19, 2013 - link

    "You don't know who I am, you don't know my credentials."
    I'm not sure anyone here is interested---you've already made clear you're a conspiracy theorist, that you believe Apple is paying off reviewers, that you disrespect folks MUCH more intelligent than yourself when it comes to chip architecture...and that your "main motivation was (Is) to show him that not all AT readers are incapable of reading between the lines". You've shown NO one ANYthing substantiated. You continue to argue baseless claims and accuse respected individuals and teams of intelligent members of being biased towards Apple. Nothing in this review supports your claims---NOTHING! And, as I pointed out earlier---even the biggest anti-apple sites are applauding Apple's effort with this SoC.
    You're in the minority---and you're so vain as to think we would care who you are and what your credentials are. It sounds to me like you're a 17 year old with a decent vocabulary and not enough paper in the pocket to pick up an iPhone 5s for yourself. But...what do I know. I don't know you, your credentials...or how you lean politically, nor do I care.
    IMO---you're an insult to the entire Anand crew. I'm not sure why I continue to read your responses, they're all the same, just worded differently. Again...you're in the (extreme) minority. You're certainly not an engineer, chip designer, app developer or technological guru---if you were, you would understand the feat Apple has achieved with this SoC architecture.
    J
    Reply
  • Nurenthapa - Friday, September 20, 2013 - link

    I've been enjoying reading this in China, but you, sir, are really annoying me with your sniveling drivel. You have an axe to grind and simply won't shut up. Hope you disappear from this forum. BTW, I use a HTC One and iPad 2, and occasionally my old original 2007 iPhone. I love IOS and iPhones, but won't be buying one until they come out with a somewhat bigger screen. Reply
  • oryades - Wednesday, September 18, 2013 - link

    First Intel, now Apple - the same kind of featured reviews. Reply
  • edward kuebler - Wednesday, September 18, 2013 - link

    We are talking about 64 bits too much. The story is the new instruction set in ARMv8. Instead of complicating the hardware for backwards compatibility (e.g. look at x86, still supporting 16bit code), they wrote a new instruction set that is faster and less energy demanding. There is still ARMv7 compatibility, but the 64bit mode is independent. And the thing is, once you redesign your architecture, why not go 64bit? What's the point of staying 32 bit? Moving more data around cuts both ways, but more and wider registers help compiler optimizations and media decoding. I didn't get all this "cunning deceitful conspiracy" feeling you talk about. Staying in 32 bit land, *that* would keep me guessing. Reply
  • Anand Lal Shimpi - Wednesday, September 18, 2013 - link

    Our browser based suite (stressing js/HTML5 and other browser based workloads) remains unchanged from all of the other mobile SoC reviews we've done. There's no way of getting around the software differences on these mobile devices as you buy hardware+software together. Unfortunately it's still our best option for non-GPU cross platform comparisons, there just aren't many good cross platform CPU tests.

    I called out the inclusion of hardware accelerated AES/SHA when referencing those tests; there were no attempts to hide that fact. The fact remains that those algorithms will see a speedup on ARMv8 hardware because of those instructions. Note this is no different than when we run the TrueCrypt benchmarks on AES-NI enabled processors vs. those that don't have it (e.g. http://images.anandtech.com/graphs/graph5626/44765...

    Apple provided absolutely zero guidelines on how the review was to be conducted. The only stipulations were around making sure we didn't disclose the fact that we had devices. In fact, most manufacturers don't - at least not with us. Whenever there are any stipulations presented, we always disclose them on the site (e.g. see our early look at Trinity desktop performance).

    Krait implements ARMv7, so that's 64-bit wide registers for its NEON units. It expanded the width of the execution units, but the registers themselves have to adhere to the ARMv7 ISA.

    I think we explained why 64-bit makes sense (doing so at the last minute doesn't make sense, immediate SIMD/Crypto perf increases today, and helps build up the ecosystem), and even highlighted cases where a performance degradation does happen (see: Dijkstra tests). Keep in mind that iOS has always erred on the side of being more thrifty with RAM to begin with. I would like to see more but I don't know how necessary it is today.

    Take care,
    Anand
    Reply
  • ddriver - Wednesday, September 18, 2013 - link

    Anand, maybe you should hire a developer to write native cross platform benchmark tools. This is the only way to avoid caveats like sponsored exclusive optimizations, different implementations, unrealistic low footprint synthetics, "selective compilers" (*cough Intel*) and whatnot. Considering the number of reviews you are doing, and the fact that C/C++ compilers have long since caught up on ARM, this is not hard and it entirely makes sense - especially relative to using different JS engine implementations to measure CPU performance. JS should go in the "browser" department, not CPU performance.

    According to wikipedia, Krait implements 128bit SIMD, so maybe that is a mistake on wikipedia's part?

    I still think encryption results belong in their own chart, and have no place in a chart that is supposed to be indicative of the integer performance delta between 32 and 64bit execution modes. Even with the clarification you made, it creates an unrealistic impression, not to mention some people skimp over the text and only look at the numbers. Encryption is encryption, integer performance is integer performance. Why mix the two (except for the reason I already mentioned and you deny)?

    I wish you'd reflected a bit on the marketing aspect of the transition to 64bit, considering how much apple is riding it this time around. No one argues that 64bit is good and more performance is good, but this brings up the issue of the particular implementation - e.g. a fast chip with only a single gigabyte of ram - and how that will play out with an actual performance demanding real world application.

    Thanks for addressing my concerns.
    Reply
  • Wilco1 - Wednesday, September 18, 2013 - link

    ARMv7 has 32 64-bit SIMD registers but they can also be used as 16 128-bit SIMD registers. Modern CPUs like Cortex-A15 and Krait support many 128-bit SIMD operations in a single cycle, but not all operations are supported (such as double precision FP). ARMv8 has 32 128-bit SIMD registers and supports SIMD of 2 64-bit doubles. Reply
  • Dug - Wednesday, September 18, 2013 - link

    "maybe you should hire a developer to write native cross platform benchmark tools"
    WHY? It is not going to make any difference. Developers aren't writing native cross platform programs. If they can take advantage of anything that's in the system, then show it off.
    That would be like telling car manufacturers to redesign a hybrid to gas only to compare with all the other gas only cars.
    Reply
  • ddriver - Wednesday, September 18, 2013 - link

    "Developers aren't writing native cross platform programs"

    Maybe it is about time you crawled out from under the rock you are living under... Any application even remotely concerned with performance and efficiency pretty much mandates being native. It would be incredibly stupid not to, considering the "closest" to native language, Java, is like 2-3 times slower and uses 10-20 times as much memory.
    Reply
  • Dug - Wednesday, September 18, 2013 - link

    Exactly my point! "native cross platform" Each cross-platform solution can only support a subset of the functionality included in each native platform.

    It doesn't get you anywhere to produce a native cross platform benchmark tool.

    Again you have to resort to name-calling and snide comments because you are wrong.
    Reply
  • ddriver - Wednesday, September 18, 2013 - link

    What you talk about is I/O, events and stuff like that. When it comes to pure number crunching, the same code can execute perfectly well on every platform it is compiled for. Actually, some modern frameworks go even further than that and provide ample abstractions. For example, the same GUI application can run on Windows, Linux, MacOS, iOS and Android, apart from a few other minor platforms. Reply
  • Anand Lal Shimpi - Wednesday, September 18, 2013 - link

    Ultimately the benchmarking problem is being fixed, just not on the time scale that we want it to. I figured we'd be better off by now, and in many ways we are (WebXPRT, Browsermark are both steps in the right direction, we have more native tools under Android now) but part of the problem is there was a long period of uncertainty around what OSes would prevail. Now that question is finally being answered and we're seeing some real investment in benchmarks. Trust me, I tried to do a lot behind the scenes over the past 4 years (some of which Brian and I did recently) but this stuff takes time. I remember going through this in the early days of the PC industry too though, I know how it all ends - it'll just take a little time to get there.

    Actually I think 128-bit registers might've been optional on v7.

    The only reason encryption results are in that table is because that's how Geekbench groups them. There's no nefarious purpose there (note that it's how we've always reported the Geekbench results, as they are reported in the test themselves).

    In my experience with the 5s I haven't noticed any performance regressions compared to the 5/5c. I'm not saying they don't exist and I'll continue to hunt, it's just that they aren't there now. I believe I established the reasoning for why you'd want to do this early, and again we're talking about at most 12 months before they should start the move to 64-bit anyways. Apple tends to like its ISA transitions to be as quick and painless as possible, and moving early to ARMv8 makes a lot of sense in that light. Sure they are benefiting from the marketing benefits of having a feature that no one else does, but what company doesn't do that?

    I don't believe the move to 64-bit with Cyclone was driven first and foremost by marketing. Keep in mind that this architecture was designed when a bunch of certain ex-AMDers were over there too...

    Take care,
    Anand
    Reply
  • BrooksT - Wednesday, September 18, 2013 - link

    Why would Anand write cross-platform benchmarks that have no connection to real world usage? Especially when you then complain that the 64 bit coverage isn't real world enough? Reply
  • ddriver - Wednesday, September 18, 2013 - link

    For starters, putting the encryption results in their own graph, like every other review before that, and side to side comparison between geekbench ST/MT scores for A7 and competing v7 chips would be a good start toward a more objective and less biased article.

    And I know I am asking a lot, but an edit feature in the comment section is long overdue...
    Reply
  • TheBretz - Wednesday, September 18, 2013 - link

    For what it's worth this is NOT a case of LITERALLY comparing "Apples" and "Oranges" - it is a case of comparing "Apple" and many other manufacturers, but there was no fruit involved in the comparison, only smartphones and tablets. Reply
  • ddriver - Wednesday, September 18, 2013 - link

    Apples to oranges is a figure of speech, it has nothing to do with the company apple... It concerns comparing incomparable objects which is the case of completely different JS implementations on iOS and Android. Reply
  • Arbee - Wednesday, September 18, 2013 - link

    Please name any case when AT's benchmarks and reviews have been proven to be biased or inaccurate. There's a reason the writers at other sites consider AT the gold standard for solid technical commentary (Engadget, Gizmodo, and the Verge all regularly credit AT on technical stories). As far as bias, have you *heard* Brian cooing about practically wanting to marry the Nexus 5? ;-)

    I think what actually happened here is that apparently Apple engineers listen to the AT podcast, because aside from 802.11ac and the screen size the 5S is designed almost perfectly to AT's well-known and often-stated specifications. It hits all of Anand's chip architecture geekery hot buttons in a way that Samsung's mashups of off-the-shelf parts never will, and they used Brian's exact line "Bigger pixels means better pictures" in the presentation. And naturally, if someone gives you what you want, you're likely to be happy with it. This is why people have Amazon gift lists ;-)

    Krait's 128 bit SIMD definitely helps, but it won't match true v8 architecture designs. I've written commercially shipping ARM assembly, and there's a *lot* of cruft in the older ISA that v8 cleans right up. And it lets compilers generate *much* more favorable code. I'll be surprised if the next Snapdragons aren't at least 32-bit v8. Qualcomm has been pretty forward-looking aside from their refusal to cooperate with the open-source community (Freedreno FTW).

    As far as 64 bit on less than 4 GB of RAM, it enables applications to more freely operate on files in NAND without taking up huge amounts of RAM (via mmap(), which the Linux kernel in Android of course also has). Apps like Loopy HD and MultiTrack DAW (not to mention Apple's own iMovie and GarageBand) will definitely be able to take advantage.
    Reply
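    Arbee's mmap() point can be illustrated with a small sketch (the file name and size here are made up for illustration; Python's mmap module wraps the same kernel facility the comment describes):

    ```python
    import mmap
    import os
    import tempfile

    # Hypothetical illustration: map a file into the address space instead of
    # reading it all into RAM. Physical pages are faulted in only as they are
    # touched; a 64-bit address space makes it practical to map very large
    # assets this way.
    path = os.path.join(tempfile.gettempdir(), "sample_asset.bin")
    with open(path, "wb") as f:
        f.write(b"\x00" * (1024 * 1024))  # 1 MiB stand-in for a large media file

    with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        # Random access into the middle of the file without loading all of it
        print(m[512 * 1024])  # prints 0

    os.remove(path)
    ```

    The same pattern is what lets an audio or video app address gigabytes of NAND-resident data while only paying RAM for the pages it actually touches.
    
    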
  • darkich - Wednesday, September 18, 2013 - link

    Anand really should be ashamed of himself.. doing the same old JavaScript (software-dependent to a major extent) trick over and over again and, clear as day, refusing to include a hardware benchmark such as GB.
    That way, he can keep on kissing Intel's and Apple's behind.
    (Btw I actually love what Apple is doing with its custom ARM chips, and I am really looking forward to the A7X and iPad 5)

    Despicable dishonesty
    Reply
  • Dug - Wednesday, September 18, 2013 - link

    Funny, you want different information than what's provided and somehow you come to the conclusion that it's biased. Yet throughout your post you have nothing to back it up, and all you have added is your own personal comments against Apple. Pot calling the kettle black? And from what I can tell from your blabbering on about 64-bit code, you have no educated information on iOS 7 64-bit or the 64-bit proc. Correct me if I am wrong and show me an app that you have developed for it. In the end it comes down to how well the product will perform, and Anand's review has shown that. And what's this comment, 'Native benchmarks don't compare the new apple chip to "old 32 bit v7 chips"'? It only compares the new Apple chip to the old ones. What 32-bit v7 chip are you talking about? And why wouldn't he compare the new chip to the old ones? And what difference does it make? Reply
  • Patranus - Wednesday, September 18, 2013 - link

    What difference does it make if the iOS and Android use different JS engines? Reply
  • StevenRN - Saturday, September 21, 2013 - link

    By "unbiased" you mean a review from a site like AndroidCentral or similar site? Reply
  • barefeats - Wednesday, September 18, 2013 - link

    Excellent review. I'm sharing it with every iPhone freak I know. Reply
  • jimbob07734 - Thursday, September 19, 2013 - link

    I'm sure not gonna be that guy that buys the first Samsung 64 bit chip they slap together to match the A7. I doubt they have even started working on it until last week. Reply
  • NerdT - Monday, September 23, 2013 - link

    All of these graphics performance comparisons (except the off-screen ones) are incorrect and absolutely misleading. The reason is that most of the other phones have a 1080p display, which has about 2.8x the resolution of the iPhone 5s! That being said, all on-screen scores for the iPhone get bumped up by about the same scale because they are calculated based on FPS only, and the frames are rendered at the device resolution. This is wrong benchmarking because you are not making an apples-to-apples comparison. I would have expected a much higher quality report from AnandTech! Please go ahead and correct your report and prevent misleading information. Reply
  • akdj - Friday, September 27, 2013 - link

    You're simply wrong. We'll leave it at that. This is objective data. You can't argue that. If you're unhappy with the results, build your own benchmarking app. Question---why would it NOT be relevant (on-screen tests) when you're using said product? Is there a chance down the road you're going to upgrade your current 4" display to 4.3"? 5"? 6"---720p? 1080p? Easy answer=No. You can't upgrade your display. The on-screen tests are neither 'wrong', 'incorrect' nor 'absolutely misleading'. They are numbers derived from current testing suites and software. You can't compare bananas and beans. They're different. So Android has increased the resolution of their displays in some cases??? Who cares! The numbers are STILL accurate. Again, each unique device is of its own making. If Anand ran his site by your reasoning we wouldn't have any way to benchmark or gauge performance. Reply
  • jljaynes - Tuesday, September 17, 2013 - link

    Beast of a chip Reply
  • tredstone - Wednesday, September 18, 2013 - link

    I agree. And to think Android fans have been criticizing the A7 and the 5s in general. This chip is amazing. Wow, what a phone it is Reply
  • Gamercore - Tuesday, September 17, 2013 - link

    Hmmmm... thought I'd be more impressed by the camera. Figured that with all the hype, it might trump Android offerings but it only seems to match 'em. Will need to see more samples :)

    Oh, and great review as always :)
    Reply
  • dylan522p - Tuesday, September 17, 2013 - link

    Oscar? Really Anand? I seriously cannot stop laughing. What were you thinking? "I can’t find any other references to Oscar online, so that's what I'll call it." Brian probably was like "Are you high?" Ian probably thought Anand, after all these years, ate some silicon beads hoping to become one with the chips and went nuts. But seriously Anand, that is the worst and funniest name ever. Why not just Swift 2 or Swiftly or something conventional. Reply
  • blanarahul - Tuesday, September 17, 2013 - link

    You are thinking it wrong. Reply
  • dylan522p - Tuesday, September 17, 2013 - link

    How so? Reply
  • Anand Lal Shimpi - Tuesday, September 17, 2013 - link

    Er sorry, reload that page, I was missing an image that explains Oscar :) Reply
  • blanarahul - Tuesday, September 17, 2013 - link

    BTW Anand, which SoC does Brian's graph refer to? Reply
  • dylan522p - Tuesday, September 17, 2013 - link

    Ahhhh, Thanks. Still WTF was Apple thinking with that name? Reply
  • Anand Lal Shimpi - Tuesday, September 17, 2013 - link

    One more update - got the real name. Oscar is the CPU in M7, Cyclone is the new Swift. Reply
  • Ryan Smith - Tuesday, September 17, 2013 - link

    Swift
    Oscar
    Cyclone

    It has a nice pattern to it.
    Reply
  • blanarahul - Tuesday, September 17, 2013 - link

    Adreno 330 beats A7's GPU?! No wonder these chips are selling like hot cakes. Reply
  • dylan522p - Tuesday, September 17, 2013 - link

    Adreno has a larger thermal headroom. If Apple moves to 5" they would be able to scale performance much much higher. Reply
  • blanarahul - Tuesday, September 17, 2013 - link

    The key word is "if". I would be really happy if they do move to 5 inch.

    BTW, these chips = Snapdragon 800.
    Reply
  • tuxRoller - Wednesday, September 18, 2013 - link

    Yes, though the battery performance seems quite good for snapdragon 800. Reply
  • robbertbobbertson - Tuesday, September 17, 2013 - link

    So going off these theoretical numbers, the new iPhone 5S GPU is 4.36% as powerful as the one in the PlayStation 4, and that's considered a weak GPU from the enthusiast's perspective. How is this a "desktop" class chip? People thinking mobile will overtake everything are dreaming. Reply
  • dylan522p - Tuesday, September 17, 2013 - link

    You do realize that a Core Duo from years back is more than enough for most people on GPU and CPU, not the PS4 or some mid or high end chip now. Desktop class was simply marketing referring to 64-bit though. Reply
  • ScienceNOW - Tuesday, September 17, 2013 - link

    Are you kidding me? We are almost there. 4.36% is less than 5 doublings from 100% (PS4 GPU performance). In 5 years mobile GPUs will be 40% MORE powerful than the PS4. 8-9 years, and they'll equal today's GTX 780 Reply
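    As a sanity check on the arithmetic here, taking the 4.36% figure and the one-doubling-per-year assumption at face value (the assumption itself, not the math, is the contentious part):

    ```python
    import math

    ps4_fraction = 4.36  # mobile GPU as a percentage of PS4 throughput

    # Doublings needed to reach parity (100%)
    doublings_needed = math.log2(100 / ps4_fraction)
    print(round(doublings_needed, 2))  # ~4.52, i.e. under 5 doublings

    # After 5 full doublings (5 years at one doubling per year)
    print(round(ps4_fraction * 2 ** 5, 1))  # ~139.5% of the PS4, i.e. ~40% faster
    ```
    
    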
  • lowlymarine - Tuesday, September 17, 2013 - link

    Obligatory XKCD: http://xkcd.com/605/

    You're assuming that mobile GPU performance will continue to double each year indefinitely, which is patently absurd. There was a time each year's new desktop GPU doubled performance, too, but you reach a point where the laws of physics make that impractical.
    Reply
  • dylan522p - Tuesday, September 17, 2013 - link

    True, but mobile GPU performance has been doubling to tripling every year for the past six years or so. It shows no indication of slowing down or dropping below doubling, either. No doubt, though, the mobile industry will be above PS4 performance before its successor arrives. Reply
  • A5 - Tuesday, September 17, 2013 - link

    A lot of that has been fueled by catching up on process tech. That party is almost over - 20/22nm class parts are still a year+ away, and ~14nm parts for anyone that isn't Intel are even farther out. Reply
  • melgross - Tuesday, September 17, 2013 - link

    Well, if Intel's mobile line is now at 22nm, and Apple's is at 28nm, that's a problem for Intel. With the A7 proving to be about equal to what Intel is producing, going by Anand's tests here, Intel's tech isn't all that great.

    Indeed, Intel has always depended upon its better process fabrication, being a generation ahead, for its superior performance. It's not just chip design. So if Apple can catch up in performance while being a half generation behind in process node, then Apple's designs are superior to Intel's. And then Intel had better watch out.
    Reply
  • teiglin - Wednesday, September 18, 2013 - link

    There is absolutely no basis to compare the process tech between A7 and Bay Trail. We know what battery life the A7 affords the iPhone 5s, but know nothing about what sort of battery life Silvermont might provide in a smartphone form factor. If those Oscar cores are really as power-efficient as Silvermont, then yes, that'd be amazing evidence of A7's power-efficiency. Reply
  • Wilco1 - Wednesday, September 18, 2013 - link

    Given a 2.4GHz Bay Trail in a development board already cannot keep up with the A7 at 1.3GHz (A7 beats it by a huge margin on Geekbench), there is no hope for BT-based phones. BT would need to be clocked far lower to fit in a total phone TDP of ~2W, which means it loses out even worse on performance against A7, Krait and Cortex-A15.

    So yes, the fact that Bay Trail is already beaten by a phone before it is even for sale is a sign of things to come. 2014 will be a hard year for Intel given 20nm TSMC will give further performance and power efficiency gains to their competitors. It all starts to sound a lot like a repeat of the old Atom...
    Reply
  • vcfan - Wednesday, September 18, 2013 - link

    "Apple's designs are superior to Intel's. and then, Intel had better watch out."

    first of all, its arm vs x86. and second, it was "LOLz intel cant do low power chips,arm wins", now its "but but intel is 22nm" . hilarious.
    Reply
  • ScienceNOW - Wednesday, September 18, 2013 - link

    We have plenty of time until 5nm; by that time something new will most likely be in place to pick up where silicon leaves off Reply
  • solipsism - Tuesday, September 17, 2013 - link

    Since when is a PS4 a desktop machine? And why only look at the GPU and not at the CPU that was clearly referenced? Reply
  • Crono - Tuesday, September 17, 2013 - link

    Ha, I love the low-light image choice of subject. :D
    You have to admit, it seems like Apple learned some things from Nokia and HTC this round to improve their cameras, though the combination dual flash is pretty ingenious. I'm wondering if the other manufacturers will adopt it or stick to single LED and Xenon flashes.
    Reply
  • StevoLincolnite - Tuesday, September 17, 2013 - link

    My Lumia has dual LED flashes, the Lumia 928 has dual Xenon flashes.
    So it's hardly anything new when the Lumia 920 has been on the shelf for almost a year.
    Reply
  • whyso - Tuesday, September 17, 2013 - link

    My HTC raider from 2011 has dual LED flash. Reply
  • Ewram - Wednesday, September 18, 2013 - link

    my HTC Desire HD (HTC Ace) has dual led, bought it november 2010. Reply
  • melgross - Wednesday, September 18, 2013 - link

    I think you missed the explanation of what Apple did here. The dual flashes in the past, and in yours and other current devices, are just two flashes of exactly the same type. Apple's is one cool-temperature flash and one warm-temperature flash. The camera flashes before it takes the photo, then evaluates the picture quality based on color temperature. It then comes up with a combination of flash exposure that varies the amount each flash generates so as to give the proper color rendering, as well as the correct exposure.

    No other camera does that. Not even professional strobes can do that. I wonder if Apple patented the electronics and software used for the evaluation.
    Reply
  • solipsism - Wednesday, September 18, 2013 - link

    I hadn't seen such a concise explanation of this tech. Thanks. Reply
  • Dug - Wednesday, September 18, 2013 - link

    Very interesting and strange no one else has thought of doing this. I know using Nikon's system (CLS) exposure seems to be always right on, but depending on white balance setting it always needs adjustment. Reply
  • zshift - Tuesday, September 17, 2013 - link

    Awesome review! A great read, as always. Reply
  • Dman23 - Tuesday, September 17, 2013 - link

    Another awesome review by Anand Lal Shimpi! Informative and on point... love it. Reply
  • Crono - Tuesday, September 17, 2013 - link

    I want to point out for the sake of consistency that Apple's naming convention is to use lowercase letters after the number for the iPhones. So it's "4s", not "4S", at least according to their website. It's annoying, I know, just pointing it out because it confused me at first so I looked it up. Reply
  • dylan522p - Tuesday, September 17, 2013 - link

    Yeah, Brian went on a twitter rant of sorts about it. Pretty small but annoying change. Reply
  • tipoo - Tuesday, September 17, 2013 - link

    Am I wrong, or is this the highest IPC smartphone core out there? 1.3 GHz, dual core, and it still doubles the performance of the A6 landing it into the territory most phones take higher clocked quads to get into. Reply
  • doobydoo - Wednesday, September 18, 2013 - link

    Landing it way beyond all the current quad core phones. Reply
  • UpSpin - Wednesday, September 18, 2013 - link

    Browser benchmarks != CPU performance
    So it's possible that the good browser benchmark results owe more to a better-optimized JS engine built for the A7? (Because Samsung, NVidia, MS, ATI, Intel, ... all cheat in benchmarks, it's even possible that Apple still ships the less-optimized JS and browser engine on the 5C, to increase the gap between A6 and A7 and make people believe the better browser scores are because of the A7, when it's really just heavy software optimization?)

    Isn't it odd that a raw CPU benchmark (3DMark Unlimited - Physics) positions the iPhone 5S on the same step as the 5C and 5? (Assuming there is no bug.) 3DMark also probably doesn't take advantage of 64-bit, but neither will any near-future app. Games especially won't go 64-bit in the near future because of the limited RAM; they would blow through the RAM limit more easily.
    Reply
  • ScienceNOW - Tuesday, September 17, 2013 - link

    F'ing thing sucks! I can all write it and we do it live! we do it live! Reply
  • Mondozai - Wednesday, September 18, 2013 - link

    Get off your crack pipe, boy. Reply
  • ScienceNOW - Wednesday, September 18, 2013 - link

    You, and your narrow frame of reference.. Reply
  • theCuriousTask - Tuesday, September 17, 2013 - link

    Were the previous iPhones tested with iOS 7? Reply
  • dylan522p - Tuesday, September 17, 2013 - link

    Yes, it says in the review. Reply
  • NotaFanBoy - Wednesday, September 25, 2013 - link

    My iPhone 4 seems to be choking on iOS 7; my friend told me it should be run on at least a 4s Reply
  • DanNeely - Tuesday, September 17, 2013 - link

    You appear to be missing a table on the GPU page:

    "The A7’s GPU Configuration: PowerVR G6430

    Previously known by the codename Rogue, series 6 has been announced in the following configurations:

    Based on the delivered performance, as well as some other products coming down the pipeline I believe Apple’s A7 features a variant of the PowerVR G6430 - a 4 cluster Rogue design optimized for performance (vs. area)."
    Reply
  • A5 - Tuesday, September 17, 2013 - link

    Noticed this as well. Reply
  • Anand Lal Shimpi - Tuesday, September 17, 2013 - link

    Fixed :) Reply
  • tipoo - Tuesday, September 17, 2013 - link

    Is the RAM use of these 64 bit apps higher than the 32 bit ones running on the 5? On x86 at least, moving to 64 bit pointers usually bloats your program about 25% Reply
  • dylan522p - Tuesday, September 17, 2013 - link

    Very few applications on x86 actually use 64-bit though. Almost none of the consumer applications outside of video editing and the like do so. Reply
  • ClarkGoble - Wednesday, September 18, 2013 - link

    On OS X most apps are 64-bit. Developers I've talked with say you get a 20%-30% speed increase by going 64-bit. Oddly, Apple's iWork apps are among the few on my system still 32-bit. (And that'll probably change next month.) With regards to iOS 7, I worry that they didn't increase the RAM but will, when multitasking, have to load both 32-bit and 64-bit frameworks in RAM at the same time. I assume they have a way to do this well, but extra memory would have made it less painful (although it perhaps would have hurt battery life). Reply
  • DeciusStrabo - Wednesday, September 18, 2013 - link

    Now, now, that's not really true any more. Taking my Windows 8 machine her, about 2/3 of the programs and background processes currently running are 64bit, 1/3 32bit. On MacOS it is more like 90 % 64bit, 10 % 32bit. Reply
  • name99 - Thursday, September 19, 2013 - link

    You would get more useful answers if you asked decent questions. What does "bloat your program by 25%" mean?
    - 25% larger CODE footprint?
    - 25% larger ACTIVE CODE footprint?
    - 25% larger DATA footprint?
    - 25% larger ACTIVE DATA footprint?
    - 25% larger shipped binary?
    The last (shipped binary) is what most people seem to mean when they talk about bloat. It's also the one for which the claim is closest to bullshit because most of what takes up space in a binary is data assets --- images, translated strings, that sort of thing. Even duplicating the code resources to include both 64 and 32 bit code will, for most commercial apps, add only negligible size to the shipping binary.
    Reply
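    One way to make name99's distinction concrete: when pointers go from 4 to 8 bytes, only the pointer-sized slices of the data footprint double. A back-of-the-envelope sketch (the 20% pointer fraction and 100 MB heap are invented illustrative numbers, not measurements):

    ```python
    # Hypothetical heap: 100 MB working set of which 20% is pointers.
    heap_mb = 100.0
    pointer_fraction = 0.20

    # Doubling pointer width (4 -> 8 bytes) only grows the pointer slice;
    # images, strings, audio buffers, etc. stay the same size.
    grown_mb = heap_mb * (1 - pointer_fraction) + heap_mb * pointer_fraction * 2
    print(grown_mb)  # 120.0 -> a ~20% data-footprint increase, not 2x
    ```

    This is why "bloats your program by 25%" depends entirely on which footprint is meant: pointer-heavy active data grows by some fraction of itself, while asset-dominated shipped binaries barely grow at all.
    
    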
  • Devfarce - Tuesday, September 17, 2013 - link

    The performance of the A7 chip sounds amazing. Similar performance to the original 11" MBA is pretty incredible. Makes me realize that I have a 2007 Merom 1.8 GHz Core 2 Duo in my laptop, that it's running Win7 32-bit (again!!!!), and that it is within striking distance of the iPhone 5s. I don't even want to think about GPU or memory performance; I'm sure that ship sailed long ago with the GMA X3100. Reply
  • tipoo - Tuesday, September 17, 2013 - link

    Closing in on or maybe surpassing Intel HD2500 now at least, I think. HD4000 is still a bit away, probably within striking range of A7X. Reply
  • dylan522p - Tuesday, September 17, 2013 - link

    Hopefully HD6000 is really good. They are doing a big design change then. Reply
  • Krysto - Wednesday, September 18, 2013 - link

    Intel will be focusing mostly on power consumption from now on, not performance, even on the GPU side. Although I'm sure they'll try to be misleading again, by showing off the "high-end PC version" of their new GPU, to make everyone think that's what they're getting in their laptops (even though they're not), just like they did with Haswell. Reply
  • Mondozai - Wednesday, September 18, 2013 - link

    You have no clue, Krysto. Reply
  • Devfarce - Wednesday, September 18, 2013 - link

    I wouldn't say Intel is misleading on performance, however very few companies will demand the parts with the biggest GPU like Apple does. People just don't demand the parts with the big GPUs although they should. Which is why Intel currently sells mostly HD4400 in the windows Haswell chips on the market.

    But back to the iPhone, this is truly incredible even if people don't want to believe it.
    Reply
  • akdj - Thursday, September 19, 2013 - link

    Not sure you know what you're talking about. The 5000 & 51(2?)00 iGPUs are incredible. Especially when you take into account the efficiency and power increase of its (Haswell) architecture in comparison with the HD4000 in Ivy Bridge. I think Apple's demand here is a big motivation for Intel to continue to innovate with their iGPUs...regardless of what the other 'ultrabook' OEMs are demanding. They just don't have the pull...or the 'balls' to stand up to Intel. I also think Intel has impressed themselves with the performance gains from the HD3000--->4000--->4600/5000/5100 transitions. As they progress and close the gap for the normal consumer that enjoys gaming and video editing (not the GPU guru demanding the latest SLI nVidia setup)...when directly compared with discrete cards, they'll enjoy a big win. Already ultrabook sales are being subsidized by Intel...to the tune of $300,000,000. I think they're motivated, and Apple absolutely IS using the high-power GPUs. Not the 4600 all others have chosen. The 5000s are already in the new MBA. The rMBP refresh is close and my bet is they'll be using the high-end iGPU in the 13/15" rMBP updates. Hopefully still maintaining the discrete option on the 15"...but as performance increases in the portable laptop sector...I'm not so sure most consumers wouldn't value all-day battery life over an extra 10fps in the latest FPS;). The 13" MBA is already getting 10-12 hours of battery life on Haswell with the HD 5000. And able to play triple-A games at decent frame rates, albeit not on 'ultimate' settings with anti-aliasing. For those interested, they'll augment their day-long-use laptop with a gaming console. I think the big beige desktop's days are limited. We'll see. While I don't disagree Intel tends to embellish their performance...in this case, they're going the right direction.
Too much competition...including from the ultra-low-voltage SoC developers making such massive inroads (this review is all the proof you need). Reply
  • ESC2000 - Saturday, September 21, 2013 - link

    So inexplicably the air has this great GPU coupled with a crappy screen that limits the GPU's impact. Then on the windows ultrabooks we have generally better screens (at least on the ones that have similar cost to the air) but less good GPUs. Reply
  • akdj - Tuesday, October 08, 2013 - link

    Touché. I'll give ya that one. While I do love my MBA for travel, the screens suck! Point was not to compare displays----rather iGPU technology movement Reply
  • vcfan - Wednesday, September 18, 2013 - link

    I doubt the A7X is even close to 300GFLOPS Reply
  • tipoo - Wednesday, September 18, 2013 - link

    78 times 2. It'll probably be close to 160 Gflops, if historical A*X series performance doubling is to be followed. Reply
  • DeciusStrabo - Wednesday, September 18, 2013 - link

    Admittedly the original 11" MBA was a dog in terms of performance when it was released. Still 1/3-1/2 (depending on the benchmark) of the speed of my i3-3220 Home Server isn't exactly bad for a phone. Reply
  • jeffkibuule - Wednesday, September 18, 2013 - link

    Didn't the original 11" MBA have overheating problems? That thing was a toaster. Reply
  • coolhardware - Tuesday, September 17, 2013 - link

    Very nice review as always Anand :-)

    I especially enjoyed checking out the display closeups and charts. As you mention, Apple's display size and pixel density are both falling behind flagship phones from other manufacturers, this list http://pixensity.com/list/phone/ puts it well outside of the top 10.

    However, Apple displays are generally pretty nice to look at. I cannot help but wonder if Apple is going to get any major display breakthroughs in future generations, or if other manufacturers (Samsung, LG, or other) will be able to surpass it... anyhoo, thanks for the nice review!

    PS if anyone has subjective opinions after viewing the 5s' display versus other flagships, I would appreciate hearing about it!
    Reply
  • Impulses - Tuesday, September 17, 2013 - link

    It's not like Apple's designing their display in Cupertino... I'm sure they have a large degree of control in order to get the specs they need and the calibration etc, but if there's any sorta revelation in display tech it'd come from the labs of the people that actually make them (Samsung, LG, Sharp, etc). Reply
  • Impulses - Tuesday, September 17, 2013 - link

    Oh and my subjective opinion on current flagship displays is that I'd still take an IPS/Super LCD/whatever over the typical oversaturated AMOLED; the Moto X made a strong case for using AMOLED for its active notifications, but ehh... I like LG's tap-to-wake gimmick on their recent models, and after owning two 4.3" devices and a 4.6-4.7" one (all with capacitive buttons) I'm actually yearning for something a bit more compact. I think the Moto X is a near perfect fit, although it does sacrifice some screen space for the buttons... I thought on-screen buttons were pointless at first because none of the early devices that used them had less bezel space, nor were they smaller, but we're finally seeing that become a reality with the X and the G2, so I think I'm on board now. Die bezels die! (Along with menu etc. HTC dumping the multitask button bothered me though, one of their few design missteps IMO.) Reply
  • coolhardware - Tuesday, September 17, 2013 - link

    Thank you Impulses for the nice analysis.

    I too really like the Moto X. It seems like it hits the sweet spot in size/capability plus it is actually available on my carrier (US Cellular). I wish that all carriers got the Moto X customization options though because I find that to be a pretty cool feature:-)

    I will eventually be replacing my Galaxy Note 2 with giant extended battery:
    http://goo.gl/TxUnez
    with something and right now it is a toss up between the upcoming Galaxy Note 3 and the Moto X. I really dislike how sluggish my Note 2 can be and I mainly blame that on Samsung's skin on top of Android :-( That is a big plus for the Moto X since I hear it is pretty clean in that respect.

    My new choice may be Moto X + Nexus 7 FHD rather than Note 3.

    Any other thoughts/opinions on the topic?
    Reply
  • akdj - Thursday, September 19, 2013 - link

    I'm with ya on the note. I bought the original GNote and the contract can't end quick enough. It's a dog! Slow as molasses, and my wife and I each own one as our 'business' phones. Made sense, our clients can sign their credit card authorization with the stylus. We use the Square system for CCs and we won't be buying the Notes again. I think you're right. TouchWiz is a killer Reply
  • coolhardware - Tuesday, September 17, 2013 - link

    Very true, however, it is my understanding that sometimes Apple can use their volume to (A) get things a bit before everyone else (like when Apple gets Intel CPUs before others) or (B) get something special added/tweaked/improved on an existing component (batteries, displays, materials).

    Sorry to not have more definitive references/examples for (A) & (B) but here's a recent illustration:
    http://www.macrumors.com/2013/07/26/intel-to-suppl...
    How much this really happens I do not know, but I imagine suppliers want to keep Apple happy :-)
    Reply
  • akdj - Tuesday, October 08, 2013 - link

    They sold more iOS devices last year (200,000,000) than vehicles sold in the world. They're still the world's number one selling 'phone'. Samsung makes a dozen...maybe two? Their flagships tend to sell well (wasn't the S3 close to 30mil @ some point?)---but nowhere near the iPhone-specific sales figures....when you're dealing in that quantity--ya betcha....you'll have access, pricing and typically 'pick of the litter' Reply
  • melgross - Wednesday, September 18, 2013 - link

    Display density is now nothing more than a marketing tool. It no longer serves any purpose. Displays with PPIs over 350 don't give us any apparent extra sharpness, as we can't see it. The Galaxy S4 has a much higher-rez display, but it still uses PenTile, so that extra rez only allows the screen to look as good as a lower-rez display. I'm wondering what Apple will do with the iPad Retina. If they do what they've been doing, then the display will have four times the number of pixels, and will be one of, if not the, highest-ppi displays on the market. They do that to make it easy for developers, but it's obviously unnecessary. No one has ever been able to see the pixels on my 326 ppi iPhone display. In fact, no one has ever seen the pixels on my 264 ppi iPad Retina display. Hopefully we'll find out in a month. Reply
  • melgross - Wednesday, September 18, 2013 - link

    Oops! I meant what will they be doing with the iPad Mini Retina display of course. Reply
  • ESC2000 - Saturday, September 21, 2013 - link

    There isn't one definitive cutoff above which extra pixels are useless since people hold their phones different distances from their faces and people have varying eyesight. 'Retina' is pure marketing - first apple used it to emphasize how great their high rez (for the time) screens were (and they did look a lot better than 15" 1366x768 screens) and now they're using it to disguise the fact that this is the same low rez (for the time) screen that they've had on the iPhone 5 for a year.

    I don't even have good eyesight but even I can see that the LG G2 screen (441 PPI) is better than my nexus 7 2013 screen (323 PPI) which is better than the iPad 3 screen (264 PPI - I don't have the 4 for comparison) which is miles better than the iPad mini screen (163 PPI). Personally I'd slot the iPhone 5 after the nexus 7 2013 on that list even though the PPI are about the same. Obviously other factors, often subjective, affect our preferences. I find most apple screens washed out but many people feel they are the only true color reproductions.

    Regardless, the random arbitrary cutoffs beyond which extra PPI supposedly makes no difference are a copout.
    Reply
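
The viewing-distance point in this sub-thread can be made concrete. As a rough sketch (the 60-pixels-per-degree acuity figure and the viewing distances are illustrative assumptions, not numbers from these comments), the PPI beyond which a typical eye stops resolving pixels falls out of simple trigonometry:

```python
import math

def retina_ppi(distance_inches, pixels_per_degree=60):
    # Width of one degree of visual field at the given distance, in inches.
    inches_per_degree = 2 * distance_inches * math.tan(math.radians(0.5))
    # PPI above which a ~20/20 eye can no longer resolve individual pixels.
    return pixels_per_degree / inches_per_degree

# A phone held close needs more PPI to look "retina" than a tablet held farther away.
print(round(retina_ppi(10)))   # ~344 PPI at a 10" phone viewing distance
print(round(retina_ppi(15)))   # ~229 PPI at a 15" tablet viewing distance
```

By this estimate a 326 PPI phone sits near the threshold for a ~10" viewing distance, which supports both comments above: there is a cutoff, but it moves with how close you hold the device and how good your eyes are.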
  • tuxRoller - Tuesday, September 17, 2013 - link

    Great review. I wish we could see this for the other architectures/socs.
    If you want to see the code for the benchmarks (and you should), there are plenty of OSS suites you can choose from. You could use Linaro's if you want, but for the STREAM benchmark you could grab http://www.cs.virginia.edu/stream/FTP/Code/ and compile it in Xcode.
    Reply
  • abrowne1993 - Tuesday, September 17, 2013 - link

    Not a single Lumia in the camera comparison? Why? The people who are really concerned about their smartphone's camera performance will put up with the WP8 platform's downsides. Reply
  • A5 - Tuesday, September 17, 2013 - link

    I would guess Anand doesn't have a 1020 handy to compare with. Probably have to wait for Brian on that one. Reply
  • Anand Lal Shimpi - Wednesday, September 18, 2013 - link

    This. I only compared to what I had on hand. Reply
  • helloworldv2 - Wednesday, September 18, 2013 - link

    A good review would compare to the best in class, i.e. the Lumia 1020. Of course it would wipe the floor with the 5S, so that wouldn't be very good for Anandtech if they want to maintain good relations with Apple and all that.. Reply
  • darkcrayon - Wednesday, September 18, 2013 - link

    I think Apple is satisfied with their performance in the 5s form factor and understands it's a reasonable compromise vs. sticking what looks like a section of an actual camera to the back of the phone, making an awkward 1/2 inch lump. And of course as an overall device the 5s is much more advanced in other ways. Reply
  • Gridlock - Wednesday, September 18, 2013 - link

    So Apple should have sent Anand a 1020?

    Or maybe Nokia PR should be slightly more awake.

    120fps and 1024 available flash tones beats a Quasimodo Nokia for me.
    Reply
  • Fleeb - Wednesday, September 18, 2013 - link

    But in photography, an on-camera flash should only be used as a last resort. Reply
  • Petrodactyl - Monday, September 23, 2013 - link

    If you're using remote flash with your phone camera, you're an idiot - and likely a bad photographer, to boot. Please try to stay within the realm of reality. Thanks. Reply
  • akdj - Wednesday, October 09, 2013 - link

    It was an excellent review---and not just 'based' on photographic prowess. Is there a blog you've got going that provides 'good reviews'? I'm honestly interested, because I found this one incredibly well written...and you were even responded to, by the admin of the site---directly. He Didn't Have Your Beloved 1020. That said---plenty of comparison reviews if ALL you're interested in is the photographic abilities of the 5s. (There's a WHOLE lot more 'most' folks are seeking from their chosen phone). That said---Apple has always, for the time, provided top notch---maybe not always #1---but easily and consistently top-5 performers (including older models while a new one is released)...not to mention, the popularity with both developers and photo share sites speaks volumes to its ubiquity. DPReview.com has an excellent and specific write up JUST for you, helloworldv2, on the abilities (and downsides) of the iPhone 5s. Seems like an 8 or 10 page write up with plenty of comments for you to participate in as well. Seems like a better idea than coming in to a (possibly the most detailed on the net, as well as insightful) site and bitching about one of the MANY functions of your 'pocket computer' review. Only so much time can be set aside to review each subject...and a finite amount of product---I'm sure in the lab hanging around, as well as the public choice...again, ubiquitous---to choose Apple or Android en masse vs. Windows handsets at this point. If photography is your 'thing' (I shoot professionally, BTW)---grab a nice point and shoot. The Canon S110 or the new Sony RX100v2 are incredible performers....then you can own a decent phone too and not have to compromise!
    J
    Reply
  • abrowne1993 - Wednesday, September 18, 2013 - link

    Fair enough. I hope Brian gets the chance to do a comparison on here. Reply
  • bootareen - Tuesday, September 17, 2013 - link

    There definitely is a display lottery. I've gone through around 7 iPhone 5's with different problems, but all had an interlace/scan line issue which is exactly what it sounds like. Even if you are further away from the screen and can't see the scan lines per se, the screen is noticeably less comfortable to look at and focus on with your eyes compared to a normal screen.

    Have you heard of this Anand, and are you aware of what would cause this issue?
    Reply
  • fokka - Tuesday, September 17, 2013 - link

    i can't really wrap my head around how the iphone can compete with high end android phones so well, even beating them by considerable margins in many benchmarks, although "only using a dual core" which is probably not even clocked as high as, say, a snapdragon 800?

    apple has put an emphasis on gpu-performance for a long time now, but seeing them on top so often and combining that with good battery life, all while using a minuscule battery (by android's standards), i have to say they are doing an astonishing job.

    too bad i don't like apple software (and pricing).
    Reply
  • Impulses - Tuesday, September 17, 2013 - link

    The Moto X competes well with all the current quad cores too, it's not that big of a rarity... The fact that they can optimize for battery life better isn't that shocking either, it's the same deal as OS X... When you're only testing against a dozen models or so versus thousands you can do a lot more in this regard. Reply
  • flyingpants1 - Wednesday, September 18, 2013 - link

    Sorry, no. As much as I hate Apple products.. Apple is killing it with amazing battery life, some of the best IPC, possibly best thermals (and therefore lowest profile) and the BEST performance, all at the same time.

    Whatever you said about testing multiple phones doesn't matter. It doesn't change facts.
    Reply
  • melgross - Wednesday, September 18, 2013 - link

    The Moto X hardly competes at all. What he said is true. We're talking about an SoC that runs at a lower speed, often considerably slower. When Apple comes out with new iPads in October, they will increase clock speed by 15% or so, and often add more GPU cores as well, usually 50% more. We'll see what the chip can really do then. To be compared to tablets, and come out ahead, is pretty damning to other processor manufacturers. Reply
  • jeffkibuule - Wednesday, September 18, 2013 - link

    When you design both hardware and software, you can optimize far more than when an OEM gets code from Google and *attempts* to get it working best on their hardware. And let's be clear, Android OEMs are clearly throwing more hardware at a software problem, when even a Windows Phone with a 1.5-year-old SoC in the Lumia 1020 feels far smoother and more fluid than the latest Android phone. Reply
  • ananduser - Wednesday, September 18, 2013 - link

    Simple...in the current smartphone game of leapfrog, the last one to release its flagship is the top dog. This time the A7 sports a brand new, benchmark "friendly", ARM designed ISA. Next time Exynos will have the upper hand, or Qualcomm, or nVidia.

    The GPU is made by Imagination, not by Apple. The other manufacturers are trying to push their own solutions rather than making do with a 3rd party. Technically any other manufacturer could pay Imagination for their Rogue chipset.

    Good for you, really.
    Reply
  • akdj - Wednesday, October 09, 2013 - link

    "This time the A7 sports a brand new, benchmark "friendly", ARM designed ISA. Next time Exynos will have the upper hand, or Qualcomm, or nVidia."
    You're right---of course, as technology progresses----but the funny thing is, even the 'year old' iPhone 5 holds its own against even the latest flagship Android devices when it comes down to graphics and browsing/energy (battery life). Apple's NOT just buying off-the-shelf parts from Imagination----they've hired many a chip expert/designer in house and are now not only optimizing their S/W code to the chip---they're also designing the chip's architecture to their system with Imagination and in-house skill sets. No other company is doing this yet...and with the new ARMv8 instruction set---this is the best I've seen in 's' updates since the release of the 3GS.
    It's a strong move by Apple, these past few years---to apply their own instruction and low-level programming as well as SoC design, match up to iOS...the software integrate(ability) with the hardware....and perhaps most importantly, their relationship with carriers (good or terrible, it's irrelevant) as they're huge sellers and contract 'getters' --- and the inability for the carriers to add their own skin has been brilliant. That, to me...and IMHO was one of the great 'feats' Steve Jobs pulled off. Obviously it only initially worked with AT&T----but Verizon almost immediately was begging to get on board and now they've not only managed to penetrate small mom n pop services in the US---but open themselves up in China, India and Japan to some of the largest carriers (and population served) in the world.
    To me, this IS the route Google should take---and rein in both the OEMs and the carriers, but why would they care? They're not hardware makers (for profit anyways)---they're miners, advertiser first aid kits----Data Miners. Your Information is their Money. Period.
    Regardless of who makes the SoC, as long as there continue to be bloated Java skins on top of Android, the 'experience' with Apple will be 'better' when it comes to fluency, updates, app selection and app development (Money Paid to developers), et al.
    J
    Reply
  • Miserphillips - Tuesday, September 17, 2013 - link

    iOS 7 very heavily copied Windows Phone. Reply
  • melgross - Wednesday, September 18, 2013 - link

    It's nothing at all like that laggard—thankfully. Reply
  • nephipower - Tuesday, September 17, 2013 - link

    Can you share how much more space is used on the device because the native apps are now 64 bit? Reply
  • solipsism - Tuesday, September 17, 2013 - link

    Like all additional binaries it's negligible. The times you'll notice extra space being used is when you need additional resources, like images for 1x and 2x display (i.e.: Retina) or making a Universal app that needs to support both iPhone and iPad UIs. Reply
  • tech01x - Wednesday, September 18, 2013 - link

    Actually, no extra space for images. It's the pointers that are double in size, so depending on the application, there will be additional RAM usage. Reply
  • solipsism - Wednesday, September 18, 2013 - link

    There is definitely additional space required for both 2x images and iPad 1x and 2x images when making an app Universal. Reply
  • akdj - Wednesday, October 09, 2013 - link

    I think Apple is NOW mandating retina assets only, isn't that something that came up with the iOS7/XCode 5 updates? (Essentially ensuring an iPad mini 'retina' and the gradual phasing out of the first two iPads....or, is it possible they can 'halve' the resolution just as they do the 2x for iPhone apps on iPads?) Reply
  • tipoo - Tuesday, September 17, 2013 - link

    When you say mobile Core 2 class performance, does that mean Core 2 ULVs like in the Macbook Air a few generations back, or Core 2 proper? I can't find any comparisons directly from Bay Trail to Core 2. Reply
  • Krysto - Wednesday, September 18, 2013 - link

    Since he said MBA, then ULV. Reply
  • Anand Lal Shimpi - Wednesday, September 18, 2013 - link

    Core 2 ULVs Reply
  • tipoo - Tuesday, September 17, 2013 - link

    Is there supposed to be a 5S in the Basemark X off screen?

    http://images.anandtech.com/graphs/graph7335/58164...
    Reply
  • dylan522p - Tuesday, September 17, 2013 - link

    He mentioned he ran into a bug. Reply
  • tipoo - Tuesday, September 17, 2013 - link

    Is the RAM use of these 64-bit apps higher than the 32-bit ones running on the 5? On x86 at least, moving to 64-bit pointers usually bloats your program about 25%. Reply
  • Ryan Smith - Tuesday, September 17, 2013 - link

    Yes, RAM usage will be higher to some degree. Apple's own 64-bit guide makes brief mention of it, noting that it needs to be suitably managed to avoid a performance regression.

    https://developer.apple.com/library/prerelease/ios...
    Reply
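
Ryan's point can be sketched with back-of-the-envelope arithmetic. In this toy model (the object layout is hypothetical, not an actual iOS structure), only pointer-sized fields double when moving from 32-bit to 64-bit, so pointer-heavy objects grow the most:

```python
def object_bytes(n_pointers, n_int32s, pointer_size):
    # Crude footprint model: pointers scale with the ABI, 32-bit ints don't.
    # Alignment padding is ignored to keep the arithmetic obvious.
    return n_pointers * pointer_size + n_int32s * 4

# Hypothetical linked-list node: 3 pointers (class/isa, next, payload) + 2 ints.
armv7 = object_bytes(3, 2, 4)   # 32-bit: 4-byte pointers -> 20 bytes
arm64 = object_bytes(3, 2, 8)   # 64-bit: 8-byte pointers -> 32 bytes
print(armv7, arm64, f"{(arm64 - armv7) / armv7:.0%}")  # 20 32 60%
```

Whole programs grow far less than this per-object worst case, since much of a typical heap isn't pointers; the ~25% figure tipoo quotes for x86 is in that spirit, a program-wide average rather than a per-object rule.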
  • Eug - Wednesday, September 18, 2013 - link

    I knew this 5S wasn't "just" another S iteration. I figured this machine was going to be a beast in terms of performance, and the improved camera, as well as the fingerprint scanner make this purchase a no-brainer for me.

    Except for one thing...

    Given the potentially somewhat increased memory usage going to 64-bit, I was disappointed that this machine didn't get 2 GB RAM. 64-bit future-proofs the iPhone 5S such that it is IMHO likely to last one generation longer for iOS updates vs. the iPhone 5/5C. The problem though is by that time (2016?) the 1 GB RAM would likely be pretty limiting.

    But then I wonder if even in 2013-2014 it could be somewhat limiting too, even just compared to the 5/5C which would have apps with smaller memory footprints using that same 1 GB amount of RAM.
    Reply
  • melgross - Wednesday, September 18, 2013 - link

    Considering the differences in the multitasking implementations, 1GB for iOS is closer to 2GB for Android. It's not as much of an issue. Reply
  • DarkXale - Wednesday, September 18, 2013 - link

    Considering iOS's tendency to reset everything that isn't in the foreground, there is plenty of motivation for increasing RAM. As is, I cannot depend on iOS to remember what I was doing in an app as soon as I switch out of it, even if it's for just 2 seconds.

    On the iPad 4 you basically can't have multiple tabs open in Safari. If you browse or scroll, write anything, or trigger any JavaScript in them, it'll be undone once you move to another tab. That is, quite frankly, an awful user experience. And that's purely down to RAM shortage.
    Reply
  • Dug - Wednesday, September 18, 2013 - link

    Try a different browser. Others don't do what you are explaining so I don't think it's a ram shortage. Reply
  • jeffkibuule - Wednesday, September 18, 2013 - link

    If I remember correctly, you need more RAM when running a virtual machine (then again, Windows Phone has a CLR and most of their phones have 1GB or less...).

    I honestly think it's because Android apps are allowed to run in the background without a care in the world, whereas on iOS, you must be performing a specific task the API allows to get that time. And if you are using too much memory, you get axed.
    Reply
  • steven75 - Thursday, September 19, 2013 - link

    True on iOS 6, but no longer true on iOS 7. Reply
  • danbob999 - Wednesday, September 18, 2013 - link

    I would have preferred a 32-bit CPU with more RAM. Reply
  • KPOM - Tuesday, September 17, 2013 - link

    Nice thorough review as always, Anand. Reply
  • juicytuna - Tuesday, September 17, 2013 - link

    That CPU is amazing. Higher IPC than A15, and without spending die area on implementing a complicated big.LITTLE scheme to keep power down. Have ARM and Qualcomm engineers been sleeping all this time? Their efforts look to be hopelessly outclassed by Apple's. This is as big a feat as when Intel dropped Conroe on the world. Reply
  • ddriver - Wednesday, September 18, 2013 - link

    No they haven't been sleeping, they just know when enough is enough. What kind of applications are you running on a mobile phone? 3D MAX? MATLAB? ANSYS? Or are you just using it to jerk your vanity around facebook and take duckface photos in front of the bathroom mirror? I know most people are :) Surely, extra performance never hurts, and will likely improve battery life a bit, but knowing how much is "enough" never hurts on its own.

    The ARMv8 transition is scheduled for all high-end products; Apple just did it before it was optimal, to impress its fanatical devotees and reinforce their blind belief in the brand. Just like it used exclusive deals with hardware vendors to bring new tech a little earlier than other manufacturers to give itself an "edge".

    The funny part is that if performance intensive applications were even available for mobile phones, the 5S will run into a brickwall with its single gigabyte of ram, because real-world workloads are nothing like those limited footprint benchmarks used in this review. I suspect the note 3 will actually score better in a heavy real-world application despite its slower CPU, because the moment the 5s runs out of memory swapping begins, and that speedy CPU will be brought to its knees because of the terrible storage bandwidth. Luckily, it is very unlikely to see any such applications for mobile phones, at least in the general public, in-house custom implementations are another matter.
    Reply
  • dugbug - Wednesday, September 18, 2013 - link

    There is a review above that describes the benefits empirically. You may want to read it.

    Also, ARM was surely not asleep at the wheel, they created the outstanding V8 architecture and I am sure were very in the loop with apple. Qualcomm on the other hand are in a quandary as they need to wait for an OS to be ready for this. Windows phone perhaps?
    Reply
  • stacey94 - Wednesday, September 18, 2013 - link

    I'm sure Qualcomm and Google are in contact and work together on these things. Google seems to have moved almost exclusively to Qualcomm SoCs on their devices. If KitKat brings 64-bit and ARMv8 support, I'm sure Qualcomm knows about it and the next-gen Snapdragons will take advantage of it. Reply
  • steven75 - Thursday, September 19, 2013 - link

    This dude's got some SERIOUS Apple-envy.

    Btw, GarageBand, iPhoto, and iMovie will love the CPU headroom.
    Reply
  • Eug - Wednesday, September 18, 2013 - link

    Excellent review, Anand, as always. I too am getting an iPhone 5C, but re: iPhone 4 with iOS 7. I will say I was very pleasantly surprised by the performance of iOS 7.0 on the iPhone 4. I think it's very usable for UI navigation most of the time. There are occasional lags, but they were also there in iOS 6, albeit slightly less often in iOS 6. Where it really feels more consistently slow is internet surfing and the like. Overall though, my wife considers the iPhone 4 with iOS 7 to be reasonably speedy, because she does not do a lot of internet browsing on the phone, or gaming.

    Actually, a few things are actually a bit faster on the iPhone 4 in iOS 7 than iOS 6.1.3. For example, SunSpider 1.0.1 is about 2725 in iOS 7, but about 2975 in iOS 6. That's almost a 10% improvement.

    So, while I personally would recommend nothing less than a 4S if getting a new phone for iOS 7, and preferably a 5C, for those existing users with the iPhone 4, don't throw it away just yet, because you might be surprised just how reasonable it is, especially if you are happy with it in iOS 6.
    Reply
  • Eug - Wednesday, September 18, 2013 - link

    Oops! I meant I am getting a 5S. We geeks "need" the 5S, but slower iDevices are still quite usable. It's amazing just how much Apple has been able to optimize iOS 7 for such ancient hardware. Reply
  • notddriver - Thursday, September 19, 2013 - link

    I love that we live in a world where 2-3 years old counts as seriously ancient. Reply
  • ltcommanderdata - Wednesday, September 18, 2013 - link

    Do you have any data on NAND speed improvements? ChAIR said level asset loading was 5x faster in the iPhone 5s during their Infinity Blade III demo. Is that just CPU/GPU/RAM based or also due to NAND speed? Faster NAND could also be contributing to the good photo burst mode performance and 720p120 video support. Reply
  • jeffkibuule - Wednesday, September 18, 2013 - link

    I'm still wondering when Apple will be taking advantage of their Anobit acquisition. I'd like to see some real NAND controllers in iOS devices if they don't suck up tons of power. Reply
  • tipoo - Wednesday, September 18, 2013 - link

    Yeah, I was curious about that, how did it get 5x faster? That seems too huge a jump for one NAND generation? Reply
  • pankajparikh - Wednesday, September 18, 2013 - link

    Hi Anand big fan...can you confirm if the 5S supports LTE in India? Reply
  • rchangek - Wednesday, September 18, 2013 - link

    You can get Hong Kong model A1530 that supports bands 38, 39, 40.
    These will support LTE bands 38, 39, 40 and I think Airtel is on band 40.
    There are reports of (yet) unmentioned models for China A1516 and A1518 that will support bands 38, 39, 40 for LTE. However, it would be interesting to see if they will support UMTS alongside TD-SCDMA for 3G.
    http://reviews.cnet.com/8301-6452_7-57602366/unann...
    Reply
  • rchangek - Wednesday, September 18, 2013 - link

    They are releasing A1528 in China for China Unicom and I find it weird that it is not listed anywhere in the LTE support documentation. Reply
  • jasonelmore - Wednesday, September 18, 2013 - link

    Great review, 1st on the web to my knowledge. However there are a few points i want to note.

    1: Brian is your mobile guy. He knows a lot more than Anand on this front. When I saw Anand did this review, I can't help but think some fanboyism is taking place here, which could hinder credibility.

    2: You say the iPhone 5S GPU is more powerful than the GPU in the iPad 4, but looking at the charts, it's on par with/the same as the iPad 4. An achievement nonetheless, but the results are well within the margin of error (1 fps)
    Reply
  • doobydoo - Wednesday, September 18, 2013 - link

    I don't think there are any question marks over Anands credibility and it's a little silly to make out that there are. Reply
  • jasonelmore - Wednesday, September 18, 2013 - link

    I'm just sayin', it makes me wonder if he's being 100% objective. And if I wonder it, then others will too. Brian is a mobile author who reviews every mobile device to come through the labs, but when Apple phones come out, suddenly Anand takes the reins from him. It's not like there are a ton of different phones out right now and they needed to spread the workload; Anand could've at least let Brian do the 5C review, but nah, he wanted both. Reply
  • Mondozai - Wednesday, September 18, 2013 - link

    I remember Anand saying that Brian would get the iPhone 5s review, but apparently that changed.

    In my opinion, this is for the better. Anand knows more about CPU's and GPU's than Brian and the most significant change with the iPhone 5s was the A7 chip, together with the camera.

    And, as you'll recall, Brian did a piece already on the 5s camera and cameras are sort of his speciality on smartphones, together with wireless.

    Secondly, and I'm just speculating here, Brian is known as more of an Android guy. Could it be that he is already working on another review, like the Z1 or the Note 3? He can't do them all.
    Reply
  • dugbug - Wednesday, September 18, 2013 - link

    That changed IMHO with the 64-bit CPU. Not sure why Anand doing the review matters to you, but the guy is amazing with CPU and GPU analysis. Let Brian do follow-up analysis. Reply
  • repoman27 - Wednesday, September 18, 2013 - link

    The embargo was lifted only one week after the reviewers got their hands on the devices, that's why Anand did these phones and not Brian. Regardless, Brian would have likely needed Anand to contribute on the A7 analysis, which was a crucial component of this review, and we would all still be waiting for a thesis from Brian. Reply
  • susan_wong - Wednesday, September 18, 2013 - link

    For the AT Smartphone Bench battery life test:

    It would be great to know how many total web pages were loaded for each of the 4S, 5 and 5S before the battery died.
    Reply
  • Krysto - Wednesday, September 18, 2013 - link

    Seems like Apple got to 64-bit first only because they didn't redesign the CPU core, too, like everyone else is doing. They rebuilt Swift on top of ARMv8 and just tweaked it. Reply
  • ltcommanderdata - Wednesday, September 18, 2013 - link

    http://www.anandtech.com/show/6420/arms-cortex-a57...

    "Architecturally, the Cortex A57 is much like a tweaked Cortex A15 with 64-bit support."
    "Similarly, the Cortex A53 is a tweaked version of the Cortex A7 with 64-bit support."

    The 64-bit Cortex A57 and Cortex A53 are directly based on the existing 32-bit A15 and A7 respectively. That Apple's Cyclone is based on Swift really isn't a reason to dismiss it especially given how effective the results are.
    Reply
  • dugbug - Wednesday, September 18, 2013 - link

    They got 2x performance in both CPU and GPU. You want to ding it? You think this is something you just drag and drop in a powerpoint and press the "Make Chips" button? jesus... Reply
  • weiln12 - Wednesday, September 18, 2013 - link

    Great article as always. One thing I've noticed and don't understand is what's up with the different SKUs for CDMA. Both could be combined into one SKU, since A1453 has the same CDMA/GSM/WCDMA as A1533 and more LTE bands than A1533 (17/26). Why have both when one will work? Seems like I'm missing something here... Reply
  • Krysto - Wednesday, September 18, 2013 - link

    Ah, so now we finally see that Imagination's "triangle throughput" and all of those benchmarks spiking way ahead of other chips before, were just BS. ARM was right to call Imagination on it. Those benchmarks never mattered for gaming performance, as we can see with the new iPhone, yet Anand kept showing them at the top of each benchmark to show how "impressive" the Imagination GPU was.

    Glad to see you admit how wrong you were, Anand.
    Reply
  • Mondozai - Wednesday, September 18, 2013 - link

    You just keep hating on Apple/Anandtech throughout the entire thread.
    Honestly, you're just Krysto.
    Reply
  • Mondozai - Wednesday, September 18, 2013 - link

    just sad*
    (love that fact that there's no edit button)
    Reply
  • Wilco1 - Wednesday, September 18, 2013 - link

    Lots of us agree most of Anand's benchmarks are rubbish. Sorry but in 2013 we shouldn't have a major tech site using JS benchmarks and pretending they show CPU performance. Reply
  • tabascosauz - Wednesday, September 18, 2013 - link

    This is why you can doubt Apple's claims in any area OTHER than graphics.

    G6430...jesus christ. Literally destroys every other mobile GPU other than Adreno 330 which can actually put up a fight.
    Reply
  • vision33r - Wednesday, September 18, 2013 - link

    This is proof that Samsung was very much a copycat from the start, once the silicon is no longer produced and shared with them. They were caught messing with adding more cores that they have no clue how to optimize, and now Apple jumped into 64-bit and Samsung has to play catch-up. Reply
  • purerice - Wednesday, September 18, 2013 - link

    Samsung is like that with everything from televisions to toasters. Reply
  • dugbug - Wednesday, September 18, 2013 - link

    and now vacuum cleaners according to dyson Reply
  • purerice - Wednesday, September 18, 2013 - link

    So far, little mention of battery life. I wonder if LPDDR3 has a detrimental effect on battery life compared to LPDDR2.

    I usually like Anand's work but this quote got me: Although Apple could conceivably keep innovating to the point where an A-series chip ends up powering a Mac, I don't think that's in the cards today.

    Sorry Anand, the only way ARM can do that is to attempt redesigning its chips for the desktop, or to try some Larrabee-type chip based on its A-series. 8 Bay Trails working together could outrun a low-end Haswell on performance, wattage, and price, but if it were really that easy, Intel would be doing that already. Maybe if ARM bought AMD, but with ARM's current strategy, it just doesn't seem feasible for them to overtake Intel or AMD.
    Reply
  • Wilco1 - Wednesday, September 18, 2013 - link

    A quad core version of the current A7 would already outperform the current Haswell in the latest MacBook Air. Reply
  • stacey94 - Wednesday, September 18, 2013 - link

    No it wouldn't. What are you even basing that off? The Geekbench 3 scores?

    Even assuming that's applicable across platforms, the Haswell will have twice the single threaded performance (again, based off Geekbench scores that probably mean nothing here). This matters more. By your logic AMD's 8 core bulldozer should have outperformed Sandy Bridge. It didn't.
    Reply
  • Wilco1 - Thursday, September 19, 2013 - link

    With double the cores and a small clock boost to 1.5GHz it would have higher throughput. Single threaded performance would still be half of Haswell of course, so for that they would need to increase the clock and/or IPC further. A 20nm A7 could run at 2GHz, so that would be ~75% of Haswell ST performance. I would argue that is more than enough to not notice the difference if you had 4 cores. Reply
  • Laxaa - Wednesday, September 18, 2013 - link

    Nice review, but I'm disappointed by the audio capture performance. 64kbps mono is not OK in 2013, and I see that most smartphone manufacturers skimp on this. Even my Lumia 920 disappoints in this department (96kbps mono), but at least it has HAAC mics that make it a decent companion at concerts (I think the 1020 has 128kbps stereo)

    Why isn't this an issue in an industry where everyone guns for better video and still image performance? It seems like such a small thing to ask for.
    Reply
  • ddriver - Wednesday, September 18, 2013 - link

    Same thing as with the lack of ac WiFi and LTE-A: with so many improvements in the 5s, Apple really needs to hold back a few features so it can make the iPhone 6 an attractive product. You can pretty much bet money that the iPhone 6 will fill those gaps, deliberately left gaping. Reply
  • steven75 - Friday, September 20, 2013 - link

    If that's what you think, I can only imagine what you thought about the Nexus 4 with no LTE at a time when ALL phones came with LTE (not just flagships), and the Moto X with its middle-tier internals yet flagship pricing. Reply
  • teiglin - Wednesday, September 18, 2013 - link

    I'm a bit baffled by the battery life numbers. Specifically the difference in performance relative to the 5/5c on wifi vs. cellular. Given that they are all presumably using the same wifi and cellular silicon, why is there such a dramatic relative increase in the battery life of the 5s compared to the 5/5c moving from wifi to LTE? I don't see why the newer SoC should be improving its efficiency over LTE vs. wifi; if anything, I'd expect a good wifi connection to feed data to the platform faster than LTE, allowing the newer silicon to race to sleep more effectively.

    Were all the tests conducted on the same operator with comparable signal strength? Obviously you can't do much to normalize against network congestion--a factor almost certain to favor tests run in the past, though perhaps middle-of-the-night testing might help minimize it--but what other factors could account for this difference? Do you have any speculation as to what could cause such a huge shift?
    Reply
  • DarkXale - Wednesday, September 18, 2013 - link

    In wireless communications, power draw from the CPUs is considered negligible. It's transmitting the actual symbols (the bits) that costs massive amounts of power, so much so that compressing data will normally yield battery savings. Similarly, Anand makes a mistake here on the second-to-last page: higher data rates are -not- more power efficient, they are less so. Reply
  • teiglin - Wednesday, September 18, 2013 - link

    It has been my experience that the SoC and display are more power-hungry than cellular data transfer in terms of peak power consumption. That's just anecdotal of course, based on comparing battery drain from an active download vs. screen-on-but-idle vs. screen-on-and-taxing-cpu and such. And if you're actually saying that SoC power draw in smartphones is negligible, then please just stop; I'm assuming you're just arguing that baseband/transceiver power is higher.

    Anand and Brian have always argued that newer, faster data transfer standards help battery life because generally those standards run at comparable power levels to the old ones but get tasks done faster, for the same load (e.g. their battery life test). I'm not an expert in wireless communications, but their numbers have always borne out such arguments. I look at it as analogous to generational CPU improvements: they get faster and can spend more power while completing tasks, but total power to do a given task can be reduced by having a more efficient architecture.

    All of which is at best peripheral to my actual question, since I was asking about differences within the same communications standards at (presumably) the same theoretical data rates, but I guess Anand and company have stopped reading comments. :(
    Reply
  • DarkXale - Wednesday, September 18, 2013 - link

    No, I'm arguing that the power requirement from intensifying the modulation scheme (which, along with MIMO - is how you increase bandwidth efficiency) is significantly higher than the power drain from having the SOC in a low use state.

    The reason I mention power requirement to the transmitter is because we -cannot- reduce the minimum amount of power required in order for the receiver to correctly interpret incoming symbols. (Assuming noise is constant)

    The sender has to put X dB into the air, or the receiver will think you sent signal 213 instead of signal 87. (Note: ECC handles some errors, but won't work if the error rate is too high)
    Reply
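The tradeoff DarkXale describes (denser modulation needs disproportionately more transmit power) is a standard consequence of the Shannon capacity formula, stated here purely for illustration:

```latex
C = B \log_2\!\left(1 + \mathrm{SNR}\right)
\quad\Longrightarrow\quad
\mathrm{SNR}_{\text{required}} = 2^{C/B} - 1
```

Spectral efficiency grows only logarithmically with SNR, so going from 2 to 4 bits/s/Hz raises the required SNR from 3 to 15, roughly 7 dB more received signal power, regardless of what the SoC is doing.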
  • teiglin - Wednesday, September 18, 2013 - link

    Ah, that makes more sense then. I wasn't aware that you actually had to transmit at higher power for LTE compared to UMTS (or whatever)--I thought the primary power difference was more sophisticated DSP on the baseband--but it's been 10 years or so since I took a DSP course. I still maintain that it hasn't been my experience that an active LTE radio consumes more power than an active SoC--but such impressions are certainly unscientific.

    I still wish someone could answer my original question, but you provide an interesting insight. :)
    Reply
  • LazyGeek - Wednesday, September 18, 2013 - link

    Quite an impressive chipset indeed. Anand's review is by far the most professional review I've come across on the Web, which has actually spoiled me to the point that I can't go back to the ordinary/lame reviews on some (most) of the other tech sites. Reply
  • dfdfffewqf - Wednesday, September 18, 2013 - link

    We'll start with our WiFi battery life test. As always, we regularly load web pages at a fixed interval until the battery dies (all displays are calibrated to 200 nits).

    In the battery test, what does "a fixed interval" mean? The meaning of this benchmark will be different depending on whether it is an end-to-end interval or an end-to-start interval.
    Reply
  • doobydoo - Wednesday, September 18, 2013 - link

    It's painfully obvious that it's a start-to-start interval, so that every device carries out the same amount of 'work', giving a meaningful battery life benchmark for a fixed workload. Reply
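For what it's worth, a start-to-start scheduler is simple to sketch. This is a hypothetical illustration of the scheme described above, not AnandTech's actual test harness; `load_page`, the interval, and the iteration count are all made up:

```python
import time

def run_fixed_interval(load_page, interval_s, iterations):
    """Drive a workload at a fixed start-to-start cadence.

    Each page load begins `interval_s` after the previous one *started*,
    so every device performs the same number of loads per hour no matter
    how fast it finishes each load. An end-to-start interval would
    instead reward slow devices with fewer loads per hour.
    """
    starts = []
    t0 = time.monotonic()
    for i in range(iterations):
        # Sleep until this iteration's scheduled start time.
        target = t0 + i * interval_s
        delay = target - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        starts.append(time.monotonic())
        load_page()
    return starts

# Example: a "page load" that takes 10 ms, scheduled every 50 ms.
starts = run_fixed_interval(lambda: time.sleep(0.01), 0.05, 5)
gaps = [b - a for a, b in zip(starts, starts[1:])]
print(gaps)  # each gap is roughly the interval, independent of load time
```

The key point is that the sleep target is computed from the loop start time, not from when the previous load finished.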
  • dfdfffewqf - Wednesday, September 18, 2013 - link

    Yes, I agree that it should be, but they should show us the tool or the test web page used in the benchmark. Reply
  • vomitme - Wednesday, September 18, 2013 - link

    "Video preview in slo-mo mode also happens at 60 fps compared to 30 fps for the standard video record and still image capture modes."

    I'm pretty sure still image capture preview is at 15 fps, at least on my iPhone 5 (both iOS 6 and iOS 7). The video capture preview is significantly smoother
    Reply
  • kyuu - Wednesday, September 18, 2013 - link

    Apple is pushing out some pretty great SoC designs, no doubt about it. Too bad they are and will always be tied to iOS.

    Hopefully the other SoC vendors will at least take note of this and work on single-threaded performance and memory subsystem performance (the areas where Apple's SoCs excel, I believe) instead of meaninglessly pushing up core counts in mobile devices. I also really hope this kicks Intel's and AMD's butts into getting really serious about their mobile silicon.
    Reply
  • jasperjones - Wednesday, September 18, 2013 - link

    I don't think the 64-bit discussion is well balanced. The same data (including, most importantly, code) needs more space on 64-bit than on 32-bit, due to the fact that pointers are twice as long, etc.

    Given that non-volatile storage is still a binding constraint in the mobile space, there's a downside to going 64-bit early: after the OS and apps are installed, you have less free space on your flash memory.
    Reply
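The pointer overhead jasperjones describes is easiest to see in a pointer-heavy structure. A minimal illustration using Python's ctypes (assuming a 64-bit Python build; the `Node` layout is invented for the example):

```python
import ctypes

# A node with two pointer fields and a 32-bit payload, as in a typical
# linked structure. Only the pointer fields change width between 32-bit
# (ILP32) and 64-bit (LP64) builds.
class Node(ctypes.Structure):
    _fields_ = [
        ("prev", ctypes.c_void_p),   # 4 bytes on ILP32, 8 on LP64
        ("next", ctypes.c_void_p),
        ("value", ctypes.c_int32),   # 4 bytes on both
    ]

print("pointer:", ctypes.sizeof(ctypes.c_void_p), "bytes")
print("Node:   ", ctypes.sizeof(Node), "bytes")
# On an LP64 build: 8-byte pointers plus alignment padding give a
# 24-byte Node; the same struct compiled ILP32 is 12 bytes.
```

Data that is mostly pointers can thus roughly double in size, while pointer-free payloads (images, audio, text) do not grow at all, which is why estimates of the real-world overhead vary so much.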
  • Laxaa - Wednesday, September 18, 2013 - link

    Doesn't it use more RAM as well? Reply
  • dugbug - Wednesday, September 18, 2013 - link

    negligible. Your art and audio assets don't change and are overwhelmingly part of the bulk. Frameworks are part of the OS. Just the application code is subject... I don't see it as much of an issue but we will see. Reply
  • Dug - Wednesday, September 18, 2013 - link

    Read the developers' comments. It's not a problem. You are only looking at one small aspect of 64-bit and not the whole picture. Reply
  • zunairryk - Wednesday, September 18, 2013 - link

    Good job Anand and the team. This is what you call a REVIEW. Full hardware overview and explanation + opinions that are backed up by THOROUGH testing. Reply
  • lukarak - Wednesday, September 18, 2013 - link

    I wonder if Apple will go for quadruple resolution for its bigger iPhone. If it's 4.8 inches, that would be 2272x1280, or 543 PPI; going to 5 inches would lower it to 521, not so far off this year's 468 in the HTC One Reply
  • Infy102 - Wednesday, September 18, 2013 - link

    64-bit support in iOS does mean more storage and RAM use. The phone needs more storage for the 32-bit and 64-bit versions of the runtime libs, and when running, both runtime libs need to be loaded in memory too. Reply
  • Laxaa - Wednesday, September 18, 2013 - link

    So 1GB of RAM might be a bit on the short side? Reply
  • Infy102 - Wednesday, September 18, 2013 - link

    I suppose one could say that, but I'm sure Apple has made sure it won't degrade the user experience. Reply
  • Laxaa - Wednesday, September 18, 2013 - link

    But still it seems a bit low, seeing that the competition have had 2GB for almost a year now. Reply
  • lukarak - Wednesday, September 18, 2013 - link

    its, not it's :D Reply
  • uhuznaa - Tuesday, September 24, 2013 - link

    Going this way is the only way Apple can go without leaving most apps behind. I guess exactly this is the reason we're still waiting for a larger iPhone, such resolutions are still a year off or so. The current resolution on a larger screen would be just ridiculous and anything in between is very unevenly supported with most apps.

    But then with iOS 7 more pixels will be more or less wasted anyway, there's hardly any detail in there...
    Reply
  • camelNotation - Wednesday, September 18, 2013 - link

    Hi Anand,

    Do you know the frequency the LPDDR3 is running at? Just curious, thanks!
    Reply
  • zephonic - Wednesday, September 18, 2013 - link

    Thanks for the thorough review.

    About the CPU, there was a much-quoted op-ed on AppleInsider that argued that Samsung did not know about the A7 and therefore it must have been manufactured by TSMC.

    You conclude it is probably safe to say that it is based on Samsung's 28nm HK+MG process. I would like to hear your views on that AI article.
    Reply
  • vampyren - Wednesday, September 18, 2013 - link

    I don't understand why the S800/Adreno 330 isn't present in many of the charts. It can't possibly perform worse than the One mini...
    Without the complete list of all models in all tests I can't take this seriously. AnandTech is really good at reviews and comparisons, so why is the list so randomly picked here?
    Reply
  • MykeM - Wednesday, September 18, 2013 - link

    The MSM8974 Snapdragon 800 is a S800/Adreno 330 SoC. So is the LG G2. Reply
  • Tyler_Durden_83 - Wednesday, September 18, 2013 - link

    You seriously believe that encryption and other speedups on a 1/2 GB system depend on the processor being 64-bit and not on other updates to its architecture/ISA? Reply
  • ViRGE - Wednesday, September 18, 2013 - link

    Being 64bit and being ARMv8 are analogous in this case. Regardless, the article makes it very clear that the new AES instructions are part of ARMv8. Reply
  • ddriver - Wednesday, September 18, 2013 - link

    Surely, the article mentions it, but I think the charts will influence a typical Apple customer way more than what the text says, considering that comparing numbers is a significantly easier skill than understanding the tech implications of text :) Reply
  • Tyler_Durden_83 - Wednesday, September 18, 2013 - link

    Yes, the new SoC gives access to both 64-bit pointers and the new ISA. And yes, the thorough review is very accurate, but most people won't understand it or will skip to the final words, where it says: "The immediate upsides to moving to 64-bit today are increased performance across the board as well as some huge potential performance gains in certain FP and cryptographic workloads."
    This is the inaccurate part. Moving to an ARMv8 CPU with architectural changes and a new ISA does that. Incidentally, said SoC is also able to address a 64-bit memory space, but that's irrelevant to the performance gains being mentioned.
    Reply
  • PC Perv - Wednesday, September 18, 2013 - link

    Hah. You still manage to plug Intel stuff that no one cares about (and that is nowhere to be seen) into a review of a product that will never, ever, compete with anything Intel.

    This site's reviews read more and more like those of "market analysts," who can't contain themselves to raise voices. "It makes sense for Apple to cut cost" Sigh. What does that have anything to do with consumers?

    Have you thought about disclosing your (and your wife's) financial portfolios, when you write about corporations with which you have conflicts of interest?
    Reply
  • Krysto - Wednesday, September 18, 2013 - link

    Yeah, it's weird how Anand manages to bring up Intel in most of his reviews, even though they have little to do with Intel.

    The only reason Bay Trail is even winning some of those benchmarks is because:

    1) they use Turbo Boost temporarily to win the benchmark race, but drop down to the base clock speed as soon as the chip overheats, while I'm sure Apple's 1.3 GHz CPU doesn't do the same.

    2) Bay Trail is a damn TABLET CHIP. It's not a smartphone chip. So if you want to compare Intel's smartphone chips with the competition, you should wait until Merrifield, Anand.

    Even if Merrifield maintains the performance of Bay Trail (I doubt it, it's going to be less), it's going to arrive six months later, and at that point the competition will be stronger. This is why it's not fair to compare a tablet chip, with all the others being smartphone chips.

    Intel released the tablet version first on purpose, precisely so that tech reviewers like Anand would praise Intel for its chips' performance, even though it's unwarranted.
    Reply
  • Wilco1 - Wednesday, September 18, 2013 - link

    I completely agree it was totally unprofessional to sneak an Intel development board into a phone review. Will we see A57 scores in future Intel phone reviews as well? I very much doubt it.

    Indeed I'd expect that when Bay Trail appears in phones, it will run at a lower max frequency, probably similar to Z3740. That will make it lose most of the JavaScript "benchmarks". However even the fastest BayTrail cannot compete with the Apple A7 in real CPU benchmarks like Geekbench 3, whether you compare 32-bit mode or 64-bit mode. And in terms of IPC, BT is beaten by A15 and A57 with a good margin, and now Apple A7 beats them all.
    Reply
  • ddriver - Wednesday, September 18, 2013 - link

    What is wrong with this review is that it tries to imply other ARMv8 chips won't get the same performance improvements as Apple's twist on v8, which is PLAIN OUT WRONG. The truth is all v8 cores will get the upgrades: twice as many GP registers, twice as wide, same for the SIMD units, even SHA and AES in hardware. Surely, bias toward Apple is nothing new at AT, and it is understandable that there is an intention to convince people to get an Apple product now rather than wait a bit and get an Android phone with a v8 chip, with a larger screen, more memory and better features, but I don't think it is fair or even decent to make such claims devoid of facts to support them. Especially considering that the tech facts say otherwise. Reply
  • ddriver - Wednesday, September 18, 2013 - link

    Same thing goes for including SHA and AES results in the chart that is supposed to display 64bit code performance improvements, considering that this boost in performance is not due to the width of the architecture but due to hardware implementations, which would be just as fast in 32bit mode... if they were available. I guess that chart really needed those to make it look better rather than the mixed bag that it really is. Reply
  • raghu78 - Wednesday, September 18, 2013 - link

    AnandTech does not say that other ARMv8 cores cannot get similar or better performance; you are making baseless accusations. Every ARMv8 ISA core implementation is not the same. It's like saying AMD Bulldozer and Intel Sandy Bridge are the same: they support the same instruction sets but are very different in micro-architecture and implementation.

    None of the ARM SoC vendors have an ARMv8 core shipping today. The earliest you could see Qualcomm come out with an ARMv8 core would be H2 2014 on 20nm. Samsung uses ARM-designed cores, so they too would come out with a Cortex A57 on 20nm a year from now. The only company shipping an ARMv8 core today is Apple. They are at least 9-12 months ahead of the competition. Credit should be given to Apple for aggressively improving their core and getting it to market well ahead of the competition. Heck, Apple beat Intel by 6 months; Merrifield for smartphones won't be shipping until MWC 2014 in February 2014.
    Reply
  • ddriver - Wednesday, September 18, 2013 - link

    Yes, because Samsung doesn't have a fanatical devotee user base to enable the kind of profit margins Apple aims for. That is why other manufacturers will wait until it actually makes sense to go for a v8 design. It is not like users need that much performance in a mobile phone; it is Apple who needs it for the PR it needs to fix its declining sales. Reply
  • lukarak - Wednesday, September 18, 2013 - link

    If users don't need more performance, why then is everybody and their brother shipping quad cores, octa cores, and tons of RAM, and touting that as an advantage over Apple, who is only dual core :D

    Performance serves a much bigger purpose than just faster processing of user's data. It enables new features.

    You sir are just a little frustrated troll.
    Reply
  • ddriver - Wednesday, September 18, 2013 - link

    Especially on mobile phones, the predominant reason for people to want performance is because foolish people are easily impressed by numbers. Not a single person who really needs and cares about performance would even consider a phone, much less run a performance critical application on a phone, of which there are none.

    Name-calling might work in your tiny head, but outside in the real world this is just you resorting to rudeness after failing to substantiate your claims with solid facts.

    But be my guest, do tell me what extra features has the extra performance enabled in 5s? What is it that you can do with the 5s that you can't do with the 5?

    Also, purpose is not "bigger" but "greater".
    Reply
  • steven75 - Friday, September 20, 2013 - link

    "the predominant reason for people to want performance is because foolish people are easily impressed by numbers."

    Don't insult yourself.
    Reply
  • lukarak - Sunday, September 22, 2013 - link

    Well, you can compare for yourself. For example, the features on the 4, which is the last one to get iOS 7, and the 5S. They may both be on iOS 7, but there are important differences, or better yet, omissions.
    You know, there's a reason a phone can, for example, record higher resolution or higher fps video than the megapixels on the sensor alone would suggest.

    The simple fact is that Apple provided consumers with something extra. Do they need it? That's only for their wallets to decide: not Apple, not me, and certainly not you. If they don't need it now, they sure will in 2 years, which will make their phone last longer, or hold a greater resale value. Each of which is an enormous entry on the plus side of the list.

    Pointing out that people don't need a superior product is just pathetic, and no well-intentioned person would say it.
    Reply
  • dugbug - Wednesday, September 18, 2013 - link

    I think you ignored the rational response and countered with a snide comment. Samsung is only a processor licensee, I believe, and has chosen to forsake its own CPUs for others'. I don't know why, but they tend to be all over the map. Apple has a deep, patient, meticulous plan for their products and invests heavily to see it move forward. If you don't like Apple, whatever, I don't care, but they tend to drive the mobile industry. Reply
  • ddriver - Wednesday, September 18, 2013 - link

    Also, the timeframe you mention concerns server chips and server platform infrastructure, which will take the bulk of the time. As for the v8 cores themselves, they are pretty much ready for manufacturing. Maybe Apple will indirectly benefit other brands' customers by forcing other manufacturers into faster v8 adoption, depending on how big of a deal Apple manages to make out of it. The server market is where v8 will really shine because of its extended address space, and servers are where most of the money on ARM chips will be made anyway, significantly more than consumer devices. Reply
  • Krysto - Wednesday, September 18, 2013 - link

    Rumors say Samsung is releasing its own CPU core next year, too, most likely based on ARMv8, and probably in H2 2014, too, just in time for the next Galaxy S flagship.

    The only reason Apple beat everyone to ARMv8 is because they didn't redesign their CPU core, they just tweaked the old one. Samsung, Qualcomm and Nvidia are all creating their own ARMv8 cores from scratch.
    Reply
  • dugbug - Wednesday, September 18, 2013 - link

    What will it run? 32-bit Android? A Samsung executive just made that statement when pushed by the media. They may go ARMv8; your guess is as good as mine, but right now it's time to look at and appreciate the astounding accomplishment in this phone rather than backhand them because it wasn't your favorite team that scored. Reply
  • thunng8 - Wednesday, September 18, 2013 - link

    Wow, a slightly tweaked dual core processor stomps all over its quad core ARM competitors.

    And where did you hear Samsung or Nvidia are designing their own CPUs? Even ARM's reference design A57 is based on the A15, and Samsung and Nvidia are currently using the A15.
    Reply
  • steven75 - Friday, September 20, 2013 - link

    When was the S4 released? Last month? How embarrassing for Samesung! Reply
  • Dug - Wednesday, September 18, 2013 - link

    Again you are making assumptions about something that isn't even out and that you haven't used. You again are not taking into account the entire platform as a whole.

    But I digress, you are right. We should all wait and not buy an iPhone because some other future phone will have stuff we may or may not want.

    Personally I think you should skip v8 of anything and wait for v9, because that will have more features.
    Reply
  • KurianOfBorg - Wednesday, September 18, 2013 - link

    I stopped caring about iPhone benchmarks. The iPhone CLEARLY curbstomps EVERY OTHER DEVICE in real-world performance. Not even a single device can even REMOTELY come close to the consistent framerate and low touch latency of the iPhone. Reply
  • Proupin - Wednesday, September 18, 2013 - link

    Apple's competence in writing software is way underestimated and overshadowed by their fancy design... but yeah, their software is masterfully engineered and always works very, very smoothly. Reply
  • Calista - Wednesday, September 18, 2013 - link

    It was a well balanced review, bringing focus to the improvements (Touch ID, new camera, etc.) while being clear about the cons the phone still brings with its smaller screen. And the A7 seems like a very well done SoC. As others have noticed, creating a much smaller phone than the competition while still besting them in many if not all areas is extremely impressive.

    It will be very interesting to see how consumers react. Android has conquered 80 percent of the global market over the last few years while Windows Mobile and BB headed for oblivion. Apple, on the other hand, has been able to hold on to its 15-20 percent market share, obviously at the expense of WM/BB/Symbian. But those can no longer shed market share (since it's more or less non-existent), so the next great shakeout will come with iOS vs Android.
    Reply
  • xype - Wednesday, September 18, 2013 - link

    The Global Marketshare Argument is something that a lot of people see as a downside to Apple’s strategy, but it is not a simple one. Apple is a lot about the ecosystem and, to be honest, an ecosystem that is at an inherent marketshare disadvantage if we are including worldwide markets like ("rural" *)) China, India, Africa, ("rural") Russia, etc.

    The fact of the matter is that those countries are not brimming over with people who have thousands of dollars to spend on devices, software and content, and there is no way Apple could serve those markets in a meaningful way. I mean, offer a $199 iPhone and then what? Hope those users will magically start spending tens of dollars per month of content? Or sell hardware at a non-existing margin and hope to be able to provide the same kind of premium services for broken devices etc that they can afford currently?

    The global marketshare numbers are a relatively meaningless metric if it can’t translate into real profits and I doubt Apple is ever going to go after that. They’re quite ok with a 40-50% marketshare in premium markets, I think.

    *) "Rural" in the sense of smaller cities with populations that are not as wealthy.
    Reply
  • lukarak - Wednesday, September 18, 2013 - link

    Market share should not be viewed as only units shipped. That's at best secondary. Reply
  • iwod - Wednesday, September 18, 2013 - link

    Two things I really want to know; I wish someone could answer me.

    1. Faster NAND? The 5 wasn't exactly market leading in NAND performance.

    2. How did Apple manage to fit more bands in the iPhone while everything on the wireless side stayed the same? Especially since that was one of the reasons not to support 2600MHz when the iPhone 5 debuted.
    Reply
  • damianrobertjones - Wednesday, September 18, 2013 - link

    " In many cases the A7's dual cores were competitive with Intel's recently announced Bay Trail SoC"

    Do you mean, "In the limited cross platform tests, which were mainly web based using a different browser, the Apple phone still didn't win in many tests"?
    Reply
  • xype - Wednesday, September 18, 2013 - link

    Ooh, I get it. You’re implying the A7 sucks! Heheee, clever! Reply
  • mschira - Wednesday, September 18, 2013 - link

    Hold on, Bay Trail and the AMD A4-5000 are not desktop grade performance but rather lowest-end notebook performance.
    So while it may be fast for a phone, calling it desktop performance is simply ridiculous.

    Now with Apple's claims in mind I am VERY scared that Apple might soon decide that iOS is ready for prime time and replace the proper Unix-type OS X with that Mickey Mouse system. They will probably start with the 11" MacBook Air machines (which I love deeply!).

    I hope that day is far in the future, but I have a bad, bad feeling.
    Reply
  • Qwertilot - Wednesday, September 18, 2013 - link

    Bay Trail is better than that :) It's due to be pushed a fair way up the hierarchy of cheaper desktops/laptops, probably with faster clocks/graphics etc. than the one tested here, of course. (Which is, I think, the tablet version?)

    With the core count/clock speeds on the A7 both relatively low, it looks unarguable that you could build a very good budget laptop chip from this. It's more a matter of whether there's any motivation for Apple to do so, which is somewhat less obvious.
    Reply
  • ShAdOwXPR - Wednesday, September 18, 2013 - link

    Wait for the iPad 5 numbers... Reply
  • Sampath.Kambar - Wednesday, September 18, 2013 - link

    Hi Anand,

    I don't understand why your reviews have never included the "Nokia Lumia" series of phones for comparison! Aren't they smartphones?
    Reply
  • ddriver - Wednesday, September 18, 2013 - link

    He said it is because he doesn't have one. And at any rate, all current Lumias are immensely underpowered devices; they only stand a chance in the camera department, which is hardly the 5s's main selling point. Maybe the 1520 will finally step up with a better CPU. Reply
  • david.steadson - Wednesday, September 18, 2013 - link

    Well, I just ran SunSpider 1.0.1 on my 11-month-old Lumia 920 and it came up with 903ms, which puts it 7th, ahead of the Galaxy S4 and just behind the iPhone 5 Reply
  • darkich - Thursday, September 19, 2013 - link

    ..which yet again is clear proof of just how software dependent SunSpider is.
    The S4 has a more than twice faster processor, and that's a fact.
    The score simply shows that WP is better optimized for the script; it tells next to nothing about how powerful the hardware is.
    For example, my S3 with Dolphin Jetpack gets a greater score on the HTML5 test (up to 492 points), which is more than even a Snapdragon 800 can manage on Google Chrome.
    Try the same test on your Lumia and then get back to me with the numbers you got
    Reply
  • piroroadkill - Wednesday, September 18, 2013 - link

    The fact that a well tuned dual core SoC performs better than most of the Quad Core monsters is of 0% surprise to me.

    I've been arguing for ages I don't want to buy an android phone with a quad core CPU. This is pretty much why I recommend Xperia SP. Fast dual core, fast GPU, reasonable price off contract, and a screen that isn't retarded-huge.
    Reply
  • ddriver - Wednesday, September 18, 2013 - link

    This has nothing to do with tuning; the performance is micro-architecture defined. A quad core v8 chip will smoke it, provided of course the benchmark scales all the way to 4 cores.

    A big screen might be "retarded" to you, but plenty of people make good use of screen real estate. It won't be long before Apple joins the phablet bandwagon, mark my words. This will give them an excuse to jack their profit margins even higher.
    Reply
  • darkcrayon - Wednesday, September 18, 2013 - link

    And much more importantly, provided the OS and apps scale well to 4 cores, which doesn't seem to be the case so far. Reply
  • Hyper72 - Thursday, September 19, 2013 - link

    iOS and OS X "scale" excellently. In iOS 4 Apple also introduced Grand Central Dispatch, which makes it very easy to develop multi-threaded applications.
    However, I agree completely that there are very few apps on any mobile platform that actually scale well beyond 2 cores.
    Reply
  • thebeastie - Wednesday, September 18, 2013 - link

    Fantastic review Anand, as usual you set the benchmark in reviews. Reply
  • ddriver - Wednesday, September 18, 2013 - link

    On a side note, the "CPU performance" page should really be renamed to JavaScript performance, seeing how it only consists of JS benchmarks and not a single native application. Reply
  • theCuriousTask - Wednesday, September 18, 2013 - link

    Anand, can you create a battery life test that leverages the GPS and other services to see what factor does the M7 chip play compared to the iPhone 5 in extended use in terms of battery life? A test that mimics day to day usage where the phone is not always on running a web browser test. Reply
  • les.moor@ymail.com - Wednesday, September 18, 2013 - link

    "Lipstick on a pig" is a nice way to describe the new color choices. Nice review, but this phone is a snoozer. Reply
  • darkcrayon - Wednesday, September 18, 2013 - link

    The 5c maybe. The 5s looks like a pretty awesome upgrade, especially from the 4s. Reply
  • abbati - Wednesday, September 18, 2013 - link

    I think the iPhone naming is going to be more streamlined now... there'll be an iPhone 6s and 6c. It wouldn't make sense to have an iPhone 6c and an iPhone 6.

    Great Review as always.... Thanks for all your effort!
    Reply
  • ddriver - Wednesday, September 18, 2013 - link

    Hate to be that guy again, but isn't anyone else gonna touch the fact that iOS and Android use entirely different JS engines? Comparing apples to oranges much?

    How about more native benches comparing to other arm chips? The tegra 4 bench on Engadget shows tegra 4 being faster than this chip. Conveniently enough, geekbench only compares the old apple chips to the new and the new chip between 32 and 64 bit modes.

    Nice try anand... For a moment there you almost fooled me ;)
    Reply
  • A5 - Wednesday, September 18, 2013 - link

    Shield has active cooling. I'd be shocked if it puts up those numbers in a smartphone form factor.

    Are there even any announced Tegra 4 smartphones coming?
    Reply
  • ddriver - Wednesday, September 18, 2013 - link

    But Tegra 4 is still the ancient 32-bit ARMv7 architecture. You mean a tiny fan is all it takes to diminish the advantage of Apple's great chips? Reply
  • darkcrayon - Wednesday, September 18, 2013 - link

    You mean a larger device running a higher clocked chip and using more power is all it takes? Yes, that is all it takes. The fact anyone would compare what's in the physical limitations of the 5s vs the Shield is pretty telling in favor of how great the A7 is. Reply
  • ddriver - Wednesday, September 18, 2013 - link

    Said the guy who named himself after an apple chip LOL. Reply
  • UpSpin - Wednesday, September 18, 2013 - link

    I agree with you. It's stupid to use browser benchmarks as a measure of CPU performance. They depend heavily on the browser used, its version, and the OS. You can't even use a browser benchmark to compare the CPU performance of devices running Android 2.3 or 4.0 with those running Android 4.3. How in the world could it be legitimate to compare scores between two totally different systems?
    And finally, iOS is closed source and totally restricted. Apple can do whatever they want with it and no one would know (like using different JS/browser versions on iOS 7 depending on the SoC or device, or using a more optimized version for the A7 than they use for the A6).

    To measure raw CPU core performance, one's only option is number-crunching benchmarks, as is done on the PC (Prime95, ...).
    To test the whole SoC, which is a collection of memory, I/O, CPU, GPU, ... across different devices with totally different software, one has to rely on other benchmarks, similar to how it's done in the PC world. But there you also don't say that a Windows PC using IE is magnitudes slower than a Mac using identical hardware but the faster Safari.

    A browser benchmark is a browser benchmark, nothing more, and not in the slightest a CPU benchmark.
    Reply
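A minimal sketch of the kind of "number crunching" benchmark UpSpin is describing: a tight integer loop with no DOM, layout, or network work. The prime-counting function and the 200,000 limit here are illustrative choices, not anything from the review, and note that as JavaScript it still runs through a JS engine's JIT rather than as native code, which is exactly UpSpin's caveat.

```javascript
// Count primes below n by trial division. A tight integer loop like this
// exercises the CPU's ALU and branch units rather than the browser stack.
function countPrimes(n) {
  let count = 0;
  for (let i = 2; i < n; i++) {
    let isPrime = true;
    for (let j = 2; j * j <= i; j++) {
      if (i % j === 0) { isPrime = false; break; }
    }
    if (isPrime) count++;
  }
  return count;
}

// A crude harness; the absolute time is machine- and engine-dependent.
const t0 = Date.now();
const found = countPrimes(200000);
console.log(found + ' primes found in ' + (Date.now() - t0) + ' ms');
```

Only a native binary removes the software layer entirely, but even through a JIT this shape of workload tracks core performance far more closely than a page-load test does.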
  • Dug - Wednesday, September 18, 2013 - link

    Apples to Oranges? Yes. They are different platforms. Does that bother you?

    Yes, they do use different JS engines. They also use different OSes.
    It also shows how Apple is able to optimize its own use of it.

    It also trounced everything in Google's Octane benchmark.
    It also beat everything in Browsermark.

    From your comment, you seem to want everyone to use the least common denominator instead of optimizing for their own system. Why?
    Reply
  • Krysto - Wednesday, September 18, 2013 - link

    Anand, I think you're wrong about the reason why the new iPhone GPU sucks in physics. You said it's because it has half the CPU cores.

    But hang on a minute - isn't that a GPU test? Also, isn't it true that some GPU makers dedicate space for parts that are better at physics? I think I read something about Adreno 330 being much better at physics, kind of like those Mali T678 or whatnot. Physics is about GPGPU, too, not just CPU, is it not?
    Reply
  • A5 - Wednesday, September 18, 2013 - link

    I don't think anyone does GPGPU on phones yet. Android has some extremely experimental OpenCL support, but it hasn't been on a shipping device yet. Physics on phones is still a purely CPU-driven affair. Reply
  • bod - Wednesday, September 18, 2013 - link

    M7 is outside, because if it were inside, there would be too much leakage current in the A7 even if the rest of the SoC was power gated. I suppose :) Reply
  • debacon - Wednesday, September 18, 2013 - link

    A thorough review. Great help in eliminating many questions when trying to make the choice for upgrade, or outright purchase. Thank you. Reply
  • supergex - Wednesday, September 18, 2013 - link

    Tired of seeing comparisons with Android or other Apple products. Don't you think a couple of the best Windows Phones or even Blackberry could have some room over here? Reply
  • darwiniandude - Wednesday, September 18, 2013 - link

    Yeah, may as well compare Palm OS and Symbian while you're at it. Reply
  • Gridlock - Wednesday, September 18, 2013 - link

    I'd be curious to see if either has any attributes that define or differentiate it (cellular talktime or performance on the Blackberry or photography on the Nokia for instance).

    I suspect, at least as far as BB goes, the answer is probably 'no'.
    Reply
  • Eug - Wednesday, September 18, 2013 - link

    "First off, based on conversations with as many people in the know as possible, as well as just making an educated guess, it’s probably pretty safe to say that the A7 SoC is built on Samsung’s 28nm HK+MG process. It’s too early for 20nm at reasonable yields, and Apple isn’t ready to move some (not all) of its operations to TSMC."

    Anand, how sure are you that it is Samsung? The chip numbering on the A7 is different according to teardowns of leaked parts, which caused some to speculate that it is a TSMC part.

    Judging by analyst rumours, Samsung still makes sense to me for the A7 (with maybe A8 or other chips going to TSMC), but I'm just curious.
    Reply
  • ettohg - Wednesday, September 18, 2013 - link

    So, I just had to check, sorry.
    In the iPhone 5 review, posted on October 16, 2012, the iPhone 5 had a Kraken score of 19618. In this review, the iPhone 5 has a score of 13919.
    In the previous review, the iPhone 5 also had a score of 1672 in the Google Octane Benchmark v1, while in this review the iPhone 5 has a score of 2859.
    I also checked, it's the same software version. Is something wrong here?
    Maybe the improvements are not from the chip architecture but from iOS 7?
    Also, are you *honestly* telling me that you repeated ALL the tests with the iPhone 5 instead of just copy-pasting the results?
    Reply
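For what it's worth, the deltas quoted above do work out to large software-only gains on the same iPhone 5 hardware. A quick back-of-the-envelope check, using only the scores in the comment (Kraken is lower-is-better, Octane higher-is-better); the arithmetic is mine, not from the review:

```javascript
// Percentage change between the two reviews' iPhone 5 scores.
const krakenOld = 19618, krakenNew = 13919; // ms, old review vs. this one
const octaneOld = 1672,  octaneNew = 2859;  // points

const krakenGain = (krakenOld - krakenNew) / krakenOld * 100; // time reduction
const octaneGain = (octaneNew - octaneOld) / octaneOld * 100; // score increase

console.log('Kraken ran ' + krakenGain.toFixed(1) + '% faster');   // 29.0% faster
console.log('Octane scored ' + octaneGain.toFixed(1) + '% higher'); // 71.0% higher
```

Gains of that size on unchanged silicon point at the JS engine and OS, which is why same-device, same-OS comparisons are the only ones these browser tests can really support.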
  • thunng8 - Wednesday, September 18, 2013 - link

    iPhone5 also benefits from improved software. You'll notice 5s results are much higher than the 5 as well. Reply
  • robinthakur - Wednesday, September 18, 2013 - link

    Don't try to explain it to them with facts; it's like the South Park episode "Simpsons Did It". I appreciate the innovation in the 5s, and it guarantees that Apple will receive the £600 yearly iPhone renewal once again when I sell my 5. Reply
  • robinthakur - Wednesday, September 18, 2013 - link

    Agreed, the screen on my old Galaxy S3 was awful enough for me to dump it and get an iPhone 5. Weird bluey-green tinting, incredibly fuzzy text, unreadable in sunlight, and over-saturation. The trick of lighting only the active pixels is a nice one, but I'd still rather have an accurately calibrated screen that doesn't present content incorrectly. Reply
  • Gorgenapper - Wednesday, September 18, 2013 - link

    This was the reason why I passed on the Galaxy S3, even though a lot of review sites were touting it as having super crisp images and text, eye popping colors, and so on. I saw a demo unit in person, and the screen was simply not comparable to that of my iPhone 4S. Skin tones were orange on the GS3, yellow images were greenish, and through all of it I could notice the pixellation from the pentile arrangement of the LEDs. Also, the demo unit had screen burn in after only a week despite the display changing every second or so, while the demo units for the iPhones were still going strong and bright. AMOLED is crap, give me LCD any day. Reply
  • robinthakur - Wednesday, September 18, 2013 - link

    You aren't comparing like with like though. The people who want a crazy 41MP feature phone with a camera that juts out of the back (meaning it won't lie flat) are not the same ones that want a premium iPhone. In truth, I don't think there will be a big demand for it. Yes I'm sure that the picture performance on a 1020 would absolutely wipe the floor with the camera in the 5S, but I would never consider buying one because it looks ugly and impractical and harkens back to the bad old days of phone design where function bested form IMO! Reply
  • KeypoX - Wednesday, September 18, 2013 - link

    Apple's biggest advantage is being a second mover. Not a first. They have never been first in anything.

    Not first in:
    Touch screen device/phone
    App store
    Tablet
    Finger print reader
    High res screens

    The biggest advantage is they are SECOND movers and take these devices to the next level.
    Reply
  • dugbug - Wednesday, September 18, 2013 - link

    Panties in a bunch? WTF is up with people and apple. Reply
  • Gridlock - Wednesday, September 18, 2013 - link

    The Newton alone makes most of your arguments laughable. Reply
  • André - Wednesday, September 18, 2013 - link

    The iPhone 4 was the first phone to ship with a more-than-300-ppi screen (960 x 640 at 3.5").

    They were the first with FireWire, USB, Thunderbolt, DisplayPort, and the first to ship a product with a PCIe SSD.

    First to ship a device with a Rogue implementation, and first to go with a fully custom ARMv8 64-bit processor.

    If anything, they are not second movers and because they are so vertically integrated they can control both hardware and software.
    Reply
  • code65536 - Wednesday, September 18, 2013 - link

    So is it safe to assume the A7 has an out-of-order execution core? Reply
  • ViRGE - Wednesday, September 18, 2013 - link

    Swift was already OOE. Reply
  • tipoo - Wednesday, September 18, 2013 - link

    A6/Swift already did. A7 may be more out of order, it's not an all or nothing thing. Reply
  • cknobman - Wednesday, September 18, 2013 - link

    Here is the long/short of this and every other Apple review.

    Anand (and any other reviewer), whether consciously or sub-consciously, is compelled to write a Apple product review that spins everything in a positive or "not too negative, glass half full" light no matter what the facts really are.

    Why? Because if he (they) don't, they will be pulled from Apple's teat and no longer given their products pre-release (and likely for free) to review.
    Reply
  • repoman27 - Wednesday, September 18, 2013 - link

    Here is the long and short of this and any other review posted online these days:

    Whatever is said by the reviewer, the comments section is flooded with posts by whiney haters, obsequious fanboys and a good dose of trolling from both sides.

    If you're talking about Anandtech reviews, you are presented with a considerable amount of empirical data combined with a pretty solid technical analysis as well as subjective impressions. If you disagree with the reviewer's conclusions, believe that certain data points are erroneous, or feel that their subjective impressions are biased, why not present your own research, analysis or hands on experiences?

    What are the facts? That you don't enjoy seeing positive reviews of Apple products and are therefore suggesting that the only way they could be garnering such praise is due to a breach of journalistic integrity for the sake of getting a couple phones one week before the general public? Face it, the way the review system works (aside from Consumer Reports) is that the OEMs provide review samples in the hopes of getting positive press for cheap, and the reviewers need to cover the latest kit first so they can attract as many eyeballs as possible to their publications in order to generate ad revenue.
    Reply
  • Dug - Wednesday, September 18, 2013 - link

    repoman27- Couldn't have said it better myself.

    cknobman- if something is better, then wouldn't that be positive? Or should Anand just make snide comments about Apple to make you happy? I don't see how getting 50% more performance could be looked at any other way.
    Reply
  • darkcrayon - Wednesday, September 18, 2013 - link

    Whiners hate it when the facts present a pro-Apple bias. Reply
  • Streamlined - Thursday, September 19, 2013 - link

    Great comment. The design and engineering by Apple is at such a level that anyone who refuses to acknowledge it is a blind hater looking for something to gripe about. Reply
  • drtaxsacto - Wednesday, September 18, 2013 - link

    Thanks very much for such a thorough review. The detailed benchmarks and other comments were well done. Reply
  • mattyroze - Wednesday, September 18, 2013 - link

    The first page is all about the color and case. Shouldn't that be the last page? If at all? This tells me the iPhone 5S is nothing more than a status update in the world of device computing. Thanks Reply
  • scottwilkins - Wednesday, September 18, 2013 - link

    I would have REALLY liked to have seen a Nokia put in the mix for these tests, especially for camera and display, if not power and all the others too. Reply
  • Abelard - Wednesday, September 18, 2013 - link

    Thanks for the review. The A7 and M7 combo sound intriguing. Can't wait for the Chipworks folks to take a closer look and post some pics. I wonder what (if anything) Apple has in mind for the M7... Reply
  • ianmolynz - Wednesday, September 18, 2013 - link

    An interesting review, but largely academic [Ferrari versus Porsche]. Most people use their devices to communicate, so the real performance bottleneck is always going to be the external cellular and data services they connect to. All the devices listed have good to great graphical performance, so unless an app or mobile site is poorly architected you won't get much in the way of on-device latency. We as consumers are spoilt for choice, so it really comes down to personal preference. Reply
  • lucian303 - Wednesday, September 18, 2013 - link

    I'd hardly say browser benchmarks are indicative of actual CPU or overall performance considering the huge differences in browser implementations. It tests the speed of the browser on the hardware, not the hardware itself. Reply
  • Whitereflection - Wednesday, September 18, 2013 - link

    Normally when people write cellphone reviews, they write things that most people can understand, comparing specs that are practical and features that people actually use and care about. Keep in mind most people have one or more smartphones nowadays; you don't need to be a rocket scientist to read a review. Sadly, 50% of this review is written in Martian. Reply
  • tech01x - Wednesday, September 18, 2013 - link

    You are free to read a lightweight review from any number of less competent reviewers. There are plenty on the net. Matter of fact, plenty of other reviewers are going to incorporate Anandtech's review as a fundamental part of their review. Reply
  • tredstone - Wednesday, September 18, 2013 - link

    that's very true. in fact there are lots of reviews that incorporate parts of Anandtech's review. if people cannot make sense of the technical info they can always go and find a review that places more emphasis on the pretty colours Reply
  • MikefromSA - Wednesday, September 18, 2013 - link

    I'm interested in how it performs as a phone. Did I miss that section, or was it not a consideration? Reply
  • tecsi - Wednesday, September 18, 2013 - link

    What are the implications of non-world Qualcomm chip for China Mobile? Reply
  • BSMonitor - Wednesday, September 18, 2013 - link

    I want to know if your OG iPhone is in Mint condition! Some day that'll sell on eBay for $2M! Reply
  • hydreds - Wednesday, September 18, 2013 - link

    Great review Thanks. Wish you had a video review. Not that I love to see you. I am lazy to read. Reply
  • webdev511 - Wednesday, September 18, 2013 - link

    And of course, as far as handsets go, this is an iOS/Android-only party. While I'm sure Windows Phone 8 devices would probably score lower, they certainly don't feel that way when you're using one. Reply
  • apertotes - Wednesday, September 18, 2013 - link

    "Now going back and holding an iPhone 4S, it feels like the very opposite is true - the 4S was too heavy"

    And that is why Apple can't lose. No matter what they do, the last thing they settle on is always going to be perfect. At least for most reviewers.

    Of course, this does not mean that the nimble Galaxy S2 was right. No, at the time, it was wrong. Lightness is only good starting with the iPhone 5.
    Reply
  • ShAdOwXPR - Wednesday, September 18, 2013 - link

    The A7X will be a monster: 130-150 GFLOPS is Xbox 360 territory. Apple saying the A7 is desktop class might be real given the benchmark numbers an A7X would post... Reply
  • darkich - Thursday, September 19, 2013 - link

    I expect the GPU in the A7X to be even more impressive, approaching 200 GFLOPS and easily beating the PS3/Xbox 360 in graphical ability because of far more capable memory.
    It should be at least on par with the Intel HD 4000 at a fraction of the TDP
    Reply
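GFLOPS claims like the ones above can be sanity-checked with simple peak-throughput arithmetic: FP32 lanes x 2 FLOPs per fused multiply-add x clock. The lane counts and clocks below are illustrative guesses for Rogue-style configurations, not confirmed A7 or A7X specifications.

```javascript
// Theoretical peak FP32 throughput in GFLOPS.
// An FMA (fused multiply-add) counts as 2 floating-point operations.
function peakGflops(fp32Lanes, clockGHz) {
  return fp32Lanes * 2 * clockGHz;
}

// Hypothetical configurations (lane counts and clocks are assumptions):
console.log(peakGflops(128, 0.45)); // 115.2 - a 4-cluster part at 450 MHz
console.log(peakGflops(192, 0.50)); // 192   - a wider and faster tablet-class part
```

By this yardstick, reaching 130-200 GFLOPS takes either more shader clusters or a higher clock than a phone-class part would likely sustain, which is why the tablet-oriented "A7X" speculation centers on a wider GPU.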
  • AaronJ68 - Wednesday, September 18, 2013 - link

    I only scanned the review, as I have a few things to do this afternoon. But tonight, during the baseball game, I plan on detailed read-through. Thank you.

    On the other side of the coin, iTunes Radio just played a Ke$ha song ... so ... :)
    Reply
  • Onemanbucket - Wednesday, September 18, 2013 - link

    Anand,
    I signed up here just to say that this is the best, most educated review I have ever read. I was swaying between the iPhone 5S and a Windows device (925), but your clear enthusiasm for the technology here has swung it.
    Cheers.
    Reply
  • Jumangi - Wednesday, September 18, 2013 - link

    The advanced SoC is one part of making a quality day-to-day phone. Apple sticking to a tiny 4" screen in 2013 should be called out as unacceptable by any enthusiast site like Anandtech. Reply
  • beggerking@yahoo.com - Wednesday, September 18, 2013 - link

    only 2 core...any multithreaded benchmark comparison vs typical Android quad cores? Reply
  • kinshadow - Wednesday, September 18, 2013 - link

    What made you guess the 6430 over the 6400? I don't see anything in the article that really points either way. Reply
  • Gorgenapper - Wednesday, September 18, 2013 - link

    Recently switched from an iPhone 4S to a Galaxy S4 Active. Two observations....with regards to what Anand said about iPhone users switching out of frustration:

    1) I would have been happy with a 4.5" ~ 4.7" screen on an iPhone. The IPS LCD panels on the iPhones are of the best quality and calibration in the industry, with the HTC One's screen coming a very close second place (not sure about the LG G2's, never seen one in person). But a 4" screen is too small, let alone the 3.5" screen on my iPhone 4S, and I just got sick of waiting for Apple to wake up.

    2) iTunes

    I can see #1 coming true when the iPhone 6 is up next, but #2 will never change.
    Reply
  • darkcrayon - Wednesday, September 18, 2013 - link

    And you need iTunes for what on a daily basis? That's a plus IMO, I like having an easy option for disk based backups and sync to multiple devices. I use it on Mac OS X though. Reply
  • Gorgenapper - Wednesday, September 18, 2013 - link

    I'm on a pc.

    1) iTunes forces me to keep a separate folder structure for pictures, as I have many high res pics, zip files, rar files that would get synced regardless. I have to maintain the pics in this folder as well as those in my main picture folder(s).

    2) Movies on the device are inextricably linked to those on the iTunes library, I can't manually get those movies off the device even if I accidentally wiped out the movies on my computer.

    3) Kind of iTunes related (locked down filesystem), but the Camera Roll on my device stores the pictures in randomly-named folders, and they're all generically named IMG_****. It's a real pain to find and extract photos using Windows Explorer.

    4) I can't use any free space on the phone to transport files like a USB drive (minor issue)

    5) iTunes is slow to boot up, and the interface is not as intuitive (to me) as a simple Windows Explorer window.

    6) I have often run into a problem where I manually delete a movie off my iPhone to make room for shooting video, then go to resync it and iTunes doesn't realize that the movie is missing - I have to unsync all the movies that are currently on the iPhone, then resync them all to get that movie back.
    Reply
  • Bragabondio - Thursday, September 19, 2013 - link

    Great review. Anand is an Apple fanboy (meaning he is an active user of a Mac, iPhone and iPad) so he has a bias towards the ecosystem, but his reviews are top notch regardless of his personal preferences. Although I do not necessarily always agree with him, I respect his technical acumen and his desire to look deeper and discover things that most casual observers miss.

    I would like to see an update on :
    a) GPS - how the navigation has improved (or not) in ios7 perhaps testing it head to head vs. Nexus 4 or 5 (in case he has a prototype) during a ride;
    b) speaker quality - this is subjective, but perhaps at least report loudness and whether it is mono or stereo
    c) phone calls quality - yes this depends on the carrier but again a subjective evaluation would be appreciated.

    A few comments on the actual review.
    graphic tests
    I love to see improvements in the graphics quality of ARM devices although I practically do not play games on my phone. Hopefully, Apple's example will push the industry towards faster development.
    Missing AC Wi-Fi is a bummer
    Regarding future-proofing I disagree with Anand: it is not the CPU or GPU but the 4-inch screen that is too small for web reading, and the trend towards larger devices will continue, so it only makes sense if you really like the screen and OS (I bet when the iPhone 6 is revealed with a 4.5-inch screen Anand will say that yes, it is hard to go back to the iPhone 5 :)
    Reply
  • WoodyPWX - Wednesday, September 18, 2013 - link

    For the web browsing battery life benchmark, it would be better to also report the number of reloads executed and the energy spent per reload (computed from those two). There are two problems this could help solve. First, slow reloads caused by a slow connection probably force the CPU to sleep, so they actually save battery and favor the worse device. Second, a faster CPU processes a web page much faster; it may draw more power but for a shorter time, so it would win in a normal situation. Unfortunately that's not the case in your benchmark, where another reload immediately follows. Energy spent per reload could show that the 5s will last longer in real-life situations, and that's what I'm interested in. Reply
  • sfaerew - Wednesday, September 18, 2013 - link

    you are right! Reply
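The per-reload energy metric WoodyPWX proposes falls out of data a battery rundown already produces. A sketch with placeholder numbers - the battery capacity and reload count below are made up for illustration, not measured values from the review:

```javascript
// Energy per reload = total energy drained (J) / number of reloads.
// Wh -> J conversion: 1 Wh = 3600 J.
function joulesPerReload(batteryWh, reloads) {
  return (batteryWh * 3600) / reloads;
}

// e.g. a ~5.96 Wh battery fully drained over a hypothetical 3000 reloads:
console.log(joulesPerReload(5.96, 3000).toFixed(2) + ' J per reload'); // 7.15 J
```

Comparing J/reload across devices would separate "draws more power" from "finishes the page sooner", which is exactly the distinction the comment is after.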
  • jeffkro - Wednesday, September 18, 2013 - link

    Looks kind of like the HTC one. Reply
  • joshjw - Saturday, September 21, 2013 - link

    Well I think what you are trying to say is that the HTC One looks a lot like the iPhone, because the 5s looks identical to the 5 except for the home button, and the 5 was released before the HTC One Reply
  • tipoo - Wednesday, September 18, 2013 - link

    So what was up with the Infinity Blade 3 guys saying it loaded 5X faster on the 5S? How much faster is the NAND, exactly? Reply
  • Sabresiberian - Wednesday, September 18, 2013 - link

    Apple, so far, has proved me wrong. A few years ago I said they were eventually going to be a cell phone dead end, but they have continued to innovate. The fact that they designed their own SoC has very much impressed me.

    I'm a PC guy, I'm a Windows guy; but I recognize a commitment to making a product better when I see one. Well done Apple!

    If Microsoft wants to grab more smart phone market share, then they could do well to emulate Apple in this regard. Moral of the story here is - don't follow, lead!
    Reply
  • slickr - Thursday, September 19, 2013 - link

    Anand is a shill for the NSA, don't trust the retard. He is there to tell you to buy a spy system that scans your face, that scans your fingertip and has your full name and all other information.

    This is the NSA's dream, a spy device masquerading as phone that shills like Anand the retard push.
    Reply
  • dugbug - Thursday, September 19, 2013 - link

    Your tin foil budget must be quite large Reply
  • hulkkii - Thursday, September 19, 2013 - link

    How fast is the camera compared to iPhone 5? (Can you test with CamSpeed)? Reply
  • Srinij - Thursday, September 19, 2013 - link

    We need to include the Xiaomi Mi3 when it's out; it's touted as the fastest. Reply
  • koruki - Thursday, September 19, 2013 - link

    Some tests show it's slower than a Samsung S3 Reply
  • Shadowmaster625 - Thursday, September 19, 2013 - link

    Who wants to bet that we will see a 10x increase in the number of robberies that involve limb amputation over the next 5 years? Reply
  • dugbug - Thursday, September 19, 2013 - link

    you think it's easier to remove a finger from someone than threatening them to unlock their phone (which could be done with any passcode-based phone)? Really. Jesus. Reply
  • koruki - Thursday, September 19, 2013 - link

    I'll take that bet.
    "But Apple promises that its reader can sense beyond the top layer of a user’s skin, and includes a “liveness” test that prevents even a severed finger from being used to access a stolen phone."

    http://www.forbes.com/sites/andygreenberg/2013/09/...
    Reply
  • hasseb64 - Thursday, September 19, 2013 - link

    MEH, iPhone 5x main problems:
    -Small battery
    -Screen too narrow, needs 5mm more width

    Fix this Apple and you might get a new deal here
    Reply
  • nitemareglitch - Thursday, September 19, 2013 - link

    Still my number one site for deep dives on hardware. As always, you do NOT disappoint. I am going to upgrade now! You convinced me. I am sold. Reply
  • av13 - Thursday, September 19, 2013 - link

    Anand, thanks a lot. Having worked with IBM iSeries systems since 2000 - RS65 III or iStar and now Power systems - I was stunned when techies and investors alike were shrugging off Apple's transition to 64-bit. Given that the A7 is RISC-based and 64-bit, its performance is going to show in single-threaded and multi-threaded apps. It was even funny when some experts quipped that the 5s would need at least 4 GB of RAM for 64-bit to make sense. I was very encouraged that Apple decided on this roadmap.
    Great analysis by you - as usual - Yawn!
    Reply
  • petersterncan - Thursday, September 19, 2013 - link

    I would really like to see how these phones stack up against the Blackberry Z10, Q10 and Z30.

    Are you considering reviewing those any time soon?
    Reply
  • ScottMinnerd - Thursday, September 19, 2013 - link

    Please excuse my ignorance, but can someone please explain how a JS-based benchmark is any indication of the quality of a CPU?

    There's so much abstraction between JS code and the CPU registers that you might as well benchmark a Ferrari vs. a school bus while they're driving over mattresses and broken glass respectively. On the same browser on the same operating system on the same motherboard using the same RAM and the same bus architecture, yes, JS code could give a relevant basis for comparison of CPUs.

    Also, does the included iOS browser have a multi-threaded JS engine? Does Android's?

    If one were to run 4 or 8 instances of the benchmark test simultaneously, how would each instance perform on each device having each CPU? Would the metric be higher on the 4+ core devices?

    If Apple is leveraging a multithreaded JS engine, or a 64-bit optimized JS engine (or both), then the quality of the CPUs depends upon a given workload. The workload on a phone in the real world is rarely solely JS-based. Testing the performance of JS and then implying that the iPhone (or its CPU) is superior in general is not only misleading, but toadying.
    Reply
  • solipsism - Friday, September 20, 2013 - link

    Across different browsers and/or OSes it's not, but when comparing the same OS and the same browser on that OS, the tests can be used to gauge HW improvements, as shown with iOS 7 and Safari on the iPhone 5 vs. iOS 7 and Safari on the iPhone 5S. Reply
  • solipsism - Friday, September 20, 2013 - link

    Overall you're reading too much into it. It's just to gauge how something nearly everyone uses on a daily basis may have improved YoY between devices and OS updates. You can't deny the results are much improved even if you don't think the tests in and of themselves are viable measures of the browser's overall performance. Reply
  • HisDivineOrder - Friday, September 20, 2013 - link

    Reminds me of the 3GS or the iPad 2. Its CPU and GPU are far overpowered compared to the underlying requirements of the display provided. In this way, they are set up for a future, higher resolution, better display where a more minor leap will progress them forward into a new product number (ie., iPhone 6).

    I imagine it will last as long as the 3GS and iPad 2, too. Those who bought an iPad 2 got an impressive lifespan for their product. Too bad Apple looks to make the iPhone 5 and iPad 3rd gen go bust far more quickly or people might think Apple products had a good long lifespan.

    Also, kinda sad that Android is still so far ahead of iOS in all the ways that really matter in the here and now.
    Reply
  • systemsonchip4 - Saturday, September 21, 2013 - link

    Android is only ahead in market share because Android is cheap, not because it's great. iOS has Android and its manufacturers beat in just about every metric (customer satisfaction rating, most durable products, most loyal user base, etc...) Reply
  • Abhip30 - Tuesday, September 24, 2013 - link

    lol. It's like saying a Corolla has more market share than a Mercedes. :P Reply
  • nedjinski - Friday, September 20, 2013 - link

    and then there are the realities that nobody seems to care about -

    http://www.wired.com/opinion/2013/09/ifixit-teardo...
    Reply
  • iannoisrk - Friday, September 20, 2013 - link

    Question on the Geekbench numbers: was it 32-bit code running on the 64-bit ISA, or native 64-bit code? Most apps will probably run 32-bit code. Wonder what the numbers will look like for them. Reply
  • ka27orl - Friday, September 20, 2013 - link

    Can you do a review on BB10 devices please, e.g. the Z30? I heard it beat all quad-core Android phones in BrowserMark and performance tests. Reply
  • Harry_Wild - Friday, September 20, 2013 - link

    I was very tempted to get the 5S but I knew Apple would not go all out on the A7 chip from previous iPhones. And now I am proven right! It only has 1GB RAM.

    I will wait patiently for the iPhone 6 and re-evaluate the phone market in mid-summer! I really like the gold color too!
    Reply
  • systemsonchip4 - Friday, September 20, 2013 - link

    So the iPhone 5S has 2 Cortex ARM-A57 cores clocked at roughly 1.3 GHz... Amazing, that's why it's able to beat out the S800 SoC Reply
  • Abhip30 - Tuesday, September 24, 2013 - link

    It's not a Cortex-A57. Since the A6, Apple has used its own ARM architecture designs. The A6 was based on ARMv7 and the A7 is a custom ARMv8 design. Reply
  • systemsonchip4 - Friday, September 20, 2013 - link

    First consumer device to have an ARM A57 processor Reply
  • tipoo - Friday, September 20, 2013 - link

    It's a custom core, not A57 or anything else from standard ARM designs. Reply
  • systemsonchip4 - Saturday, September 21, 2013 - link

    It's an ARMv8 implementation, so yes, it may be a little different than a Cortex-A57, but it is still an ARMv8 SoC, and that is why the A7 is able to beat the S800 SoC clocked at 2.3 GHz Reply
  • stevesous - Friday, September 20, 2013 - link

    Every year, they say we will see that in next year's model.
    When will you guys finally get it?
    Reply
  • yhselp - Saturday, September 21, 2013 - link

    "Interestingly enough, I never really got any scratches on the back of my 5 - it’s the chamfers that took the biggest beating."

    "If you're considering one of these cases you might want to opt for a darker color as the edges of my case started to wear from constantly pulling the phone out of my pockets"

    Hmm...
    Reply
  • darkich - Sunday, September 22, 2013 - link

    Alright, I'll make a bottom line of this review.. I accused Anand of being Apple biased, now I take that back.
    He is simply and clearly an INTEL fanboy, even while believing in his utmost objectivity.
    He just can't help it.
    Then again, when you think of the decades of omnipotent Intel influence he grew up with, in a way, that bias becomes only natural and forgivable.

    This is my message to you, Anand - Apple A7X will open your eyes real soon.
    Even you won't be able to overlook the ridiculous magnitude of superiority of that SoC to your Bay Trail.
    Mark these words.

    Take care, Darko
    Reply
  • yhselp - Monday, September 23, 2013 - link

    It's not a matter of whether the A7/A7X is faster than a given Bay Trail variant, or at all. The fact of the matter is that Intel is sitting on some truly spectacular architectural IP and that's a scientific fact; the thing is that they can't seem to get it out in time. Bay Trail is but a 'baby', exceptionally conservative architecture, whereas A7 or 'Cyclone' is not -- above all else it's wider.

    Apple/ARM is better or as good this round and might continue to be in the future if Intel doesn't speed up its game. That's true. However, even Intel's smaller architectures ARE superior to A7/ARM, let alone their big Core stuff (which isn't far from being synthesized for smartphone use); not to mention their manufacturing process advantage.
    Reply
  • talg - Sunday, September 22, 2013 - link

    Do you know, from an SoC point of view, what functions the A7 has? Reply
  • justacousin - Sunday, September 22, 2013 - link

    Based on some of my reading, Samsung is the manufacturer of the A7 chip. What is to be said about this? Reply
  • robbie rob - Sunday, September 22, 2013 - link

    @justacousin

    Not sure what's to be said. Samsung didn't design the A7, but unfortunately for ANY company in the USA it's cheaper to have most things made in Asia even though they aren't designed there. Samsung fabricates many types of chips and RAM in its plants that it doesn't necessarily design. Unfortunately for Americans, this is why many products like the Xbox are made in China.
    Reply
  • Abhip30 - Tuesday, September 24, 2013 - link

    Samsung just makes them for Apple. They are essentially a glorified Foxconn. Apple provides them blueprints and Samsung manufactures it. They just follow Apple's instructions. Reply
  • Origin64 - Monday, September 23, 2013 - link

    Still no HD-ready resolution (in 2013, really?), but we have a fingerprint scanner. A shame it's hackable and fingerprints aren't safe in general; just a few weeks ago I read about a new identification technique that made use of an infrared scan of the blood vessels in your face. More unique, harder to copy. Not that that'd be good to have either - the NSA will still get their fingers on those biometrics. Reply
  • darkcrayon - Monday, September 23, 2013 - link

    Going to 720p on a 4" phone wouldn't make much difference. Reply
  • robbie rob - Monday, September 23, 2013 - link

    Fingerprint technology is in its infancy in consumer products. Any hacking of the fingerprint scanner helps Apple and the industry; Apple will be able to patch vulnerabilities found by the best of the best. My thoughts are: overall, no one wants my fingerprints or yours. For the millions of people out there who have an iPhone, most aren't worth the work or time. To me that means I'm just fine using it to log into a phone or make a purchase on iTunes. The truth is it would be easier and more likely for someone to break into your bank account online - no one needs a fingerprint to do that. Reply
  • Promptneutron - Monday, September 23, 2013 - link

    Another comprehensive, detailed but readable review. Anand, you produce (by some margin) the finest tech reviews on the web. Even my wife (who is a tech vacuum) read this and wants an iphone 5s..and she's not alone..;)...thank you and top work (again). Reply
  • NerdT - Monday, September 23, 2013 - link

    All of these graphics performance comparisons (except the off-screen ones) are incorrect and absolutely misleading. The reason is that most of the other phones have a 1080p display, which has about 2.8x the resolution of the iPhone 5s! That being said, all on-screen scores get bumped up by about the same scale for the iPhone because they are calculated based on FPS only, and the frames are rendered at the device resolution. This is wrong benchmarking because you are not making an apples-to-apples comparison. I would have expected a much higher quality report from Anandtech! Please go ahead and correct your report and prevent misleading information. Reply
  • darkcrayon - Monday, September 23, 2013 - link

    As you even said, both onscreen and offscreen tests were shown, and the resolution difference was noted. They even have the iPhone 5 in the tests for the truest "apple to apple" comparison possible. I think you're grasping at straws here. Reply
  • robbie rob - Monday, September 23, 2013 - link

    "off screen" resolutions FPS was shown .. Reply
  • AEdouard - Monday, September 30, 2013 - link

    Hey NerdT. For a nerd, you sure don't know how to interpret charts. What do you think the offscreen tests are for? It's to eliminate the effect of display resolution. In those tests, the iPhone performed better, generally, than all other phones. The only processors that beat it were SoCs put inside tablets (where their performance can be increased).

    And beyond that, isn't the main point to be able to see how the phone will perform in real life, which is why tests at the phone's resolution matter too.
    Reply
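    For reference, the resolution gap being debated in this sub-thread can be checked with simple pixel counts. A minimal sketch, assuming the widely reported panel dimensions (1136×640 for the iPhone 5s, 1920×1080 for the Android flagships in the charts):

    ```python
    # Pixel counts for the two panels under discussion.
    iphone_5s_pixels = 1136 * 640        # 4" Retina panel
    flagship_1080p_pixels = 1920 * 1080  # typical 1080p Android flagship panel

    ratio = flagship_1080p_pixels / iphone_5s_pixels
    print(f"A 1080p panel pushes {ratio:.2f}x the pixels of the iPhone 5s")
    # Roughly 2.85x as many pixels per on-screen frame, matching the "2.8x"
    # figure above - which is why the off-screen (fixed-resolution) runs are
    # the fairer GPU comparison.
    ```

    This is why the review reports both numbers: on-screen reflects what the user sees, off-screen normalizes the workload.
    
    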
  • qristheone - Monday, September 23, 2013 - link

    If that was the case, why can the Moto X be made in America and still make a profit? Clearly Apple is just ripping people off. Reply
  • robbie rob - Monday, September 23, 2013 - link

    I don't know where you're getting your info, but in the last year only TWO handset makers made a profit: Samsung and Apple. All others broke even or lost money. Breaking even means that by the time you pay everyone plus the cost of manufacturing you didn't lose money - but you didn't bank any either.

    http://www.neowin.net/news/analysts-apple-and-sams...
    Reply
  • qristheone - Monday, September 23, 2013 - link

    The real problem with these tests is that iOS 7 has OpenGL ES 3.0 and Android has it only on 4.3. Most of these phones do not have Android 4.3; in fact I doubt that any of the phones tested had 4.3 when running them.

    For those who don't know: OpenGL (Open Graphics Library) is a cross-language, multi-platform application programming interface (API) for rendering 2D and 3D computer graphics. The API is typically used to interact with a graphics processing unit (GPU) to achieve hardware-accelerated rendering.
    Reply
  • NekoTipcat - Saturday, November 30, 2013 - link

    Well, yes, iOS supports OpenGL ES 3.0, but only the iPhone 5s's GPU supports it.
    So the real "problem" resides on iOS 7 and iDevices as well.
    Reply
  • whatsa - Tuesday, September 24, 2013 - link

    Nice.
    As it's all about apps - lol -
    it would be more interesting to see the performance gain there,
    as it will be many years before you see a native 64-bit majority in the store.
    Even though the tests bode well for future everyday usage, it will be the
    more generic apps that define its performance "today".
    Reply
  • lhlan - Wednesday, September 25, 2013 - link

    Section on the A7's dual-core vs quad-core design: you emphasized the A7's power efficiency advantage (lack of proper power gating on quad-core parts), as well as the performance-neutral factor - two cores at full speed are not slower than four cores at so-so speed. A reference to the CPU section of the Moto X review was made to back up this point.

    Closer investigation of the Moto X review shows a different picture: while performance is comparable at best, the power efficiency argument is actually in favour of quad-core! It says running two cores at full high speed requires a "ton of voltage", while running four cores at 1.2GHz doesn't need that much power, hence better power efficiency.

    In the end, do we have empirical evidence as to which design (two vs. four) saves more power?
    Reply
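    The trade-off lhlan raises can be sketched with the standard dynamic-power relation P ≈ cores · C · V² · f. The clocks and voltages below are purely illustrative assumptions chosen to show the shape of the argument, not measurements of the A7 or any Snapdragon part:

    ```python
    def dynamic_power(cores, freq_ghz, voltage, capacitance=1.0):
        """Relative dynamic power: P ~ cores * C * V^2 * f (arbitrary units)."""
        return cores * capacitance * voltage ** 2 * freq_ghz

    # Hypothetical operating points: high clocks need disproportionately
    # higher voltage, and power scales with the SQUARE of that voltage.
    two_fast = dynamic_power(cores=2, freq_ghz=2.3, voltage=1.1)
    four_slow = dynamic_power(cores=4, freq_ghz=1.2, voltage=0.9)

    print(f"2 cores @ 2.3 GHz, 1.1 V: {two_fast:.2f} relative units")
    print(f"4 cores @ 1.2 GHz, 0.9 V: {four_slow:.2f} relative units")
    # With these made-up numbers the quad is cheaper for similar aggregate
    # throughput - but poor power gating on idle cores can reverse that,
    # which is exactly the empirical question lhlan is asking about.
    ```

    The model only captures dynamic power; leakage and gating behavior are what the two reviews disagree about, so real measurements are still needed to settle it.
    
    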
  • 128bit - Thursday, September 26, 2013 - link

    First time I've learned that the iPhone 5s comes with a 1570 mAh battery.

    Anand, you're the best. Keep up the good work.
    Reply
  • rogekk - Thursday, September 26, 2013 - link

    May pick myself up one after reading this review, coupled with this African view of the iPhone

    http://techjaja.com/the-reviews/iphone-5s-review/2...
    Reply
  • anxyandy - Monday, September 30, 2013 - link

    Hmm! If this beauty is as good as it looks here, I'm afraid I won't even be considering the iPhone 5S! http://versus.com/en/sony-xperia-z1-mini-vs-apple-...
    Xperia Z1 Mini - same(ish) size, excellent features and hardware!
    Reply
  • vampyren - Wednesday, October 02, 2013 - link

    Great review. Sadly I feel so limited in iOS these days, and the screen size is just too small for my taste. I use a Galaxy S4 and I'm really happy with it. My iPhone 5 is still performing well too, but I use it less and less these days; hopefully Apple will make a larger-screen phone next time. That might make me more willing to use an iPhone, but not with 4 inches. Reply
  • AEdouard - Sunday, October 06, 2013 - link

    I'm sure they'll be offering a bigger option in 2014, either for the regular iPhone or as a new ''large'' iPhone model alongside the regular 4-inch option. Reply
  • Samwise - Wednesday, October 02, 2013 - link

    Anandtech, please review the Droid MAXX. Reply
  • Duck <(' ) - Thursday, October 03, 2013 - link

    Hello Anand, it seems that you are actually inflating the benchmark scores in Apple's case for some unknown reason. For example, the iPhone 5 in YouTube vids scores around 2300 while you show 2800. Check here https://www.youtube.com/watch?v=iATFnXociC4 Reply
  • Duck <(' ) - Thursday, October 03, 2013 - link

    You people are complete LIARS !!! Will never visit your site again. You have been bribed by apple but your website will suffer loss in reputation. Reply
  • varase - Thursday, October 03, 2013 - link

    I wonder if any of these Android benchmarks are tainted by gamed benchmark code. Reply
  • Hrel - Monday, October 14, 2013 - link

    I think in the 16:9 aspect ratio a 5" screen would probably be best. I'll never buy an apple product so I guess it doesn't really matter to me. But I think a thin bezel with a 5" 1080p screen is the way to go. Reply
  • Bossrulz - Tuesday, October 22, 2013 - link

    Hi Anand. I am planning to buy my first iPhone in the form of the 5S.
    Is it worth buying, or should I wait for the iPhone 6?
    Is it better to buy in the USA or in the country where I live?
    Does the iPhone have an international warranty?
    Reply
  • beast from the east - Wednesday, October 30, 2013 - link

    Intel only ever dominated in sales, not processing power.

    I have installed Apple systems for 25 years. In the pre-Intel Mac era, Apple's computers had twice the performance per clock cycle of the Intel equivalents, from the Motorola chips through to PowerPC.

    That's one of the many reasons why graphics, video and the scientific community used Macs.

    This chip is a beast, we all know it. With the best developer relationship in the mobile market - developers who get paid for their work - a fantastic SDK, and devs talking about an hour to recompile to 64-bit, I think Apple will be alright.

    Trying to pick holes is just 'Roid-Rage, plain and simple.
    Reply
  • AngryCorgi - Thursday, November 14, 2013 - link

    The math used in this article is incorrect. It is 76.8 GFLOPS per CORE, not for the entire GPU. The GPU should be capable of 307.2 GFLOPS. The rest of that chart is wrong as well in most places.

    At 650 MHz, per core, G6430 = 166.4 GFLOPS; × (300/650) = 76.8 GFLOPS; × 4 = 307.2 GFLOPS
    Reply
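    AngryCorgi's scaling arithmetic can be reproduced directly. Note the 166.4 GFLOPS-per-core-at-650MHz starting figure is the commenter's premise, not a confirmed spec; only the clock scaling and the four-cluster multiply are checked here:

    ```python
    per_core_at_650 = 166.4                        # commenter's premise, GFLOPS
    per_core_at_300 = per_core_at_650 * 300 / 650  # FLOPS scale linearly with clock
    whole_gpu = per_core_at_300 * 4                # G6430 is a four-cluster design

    print(f"per core @ 300 MHz: {per_core_at_300:.1f} GFLOPS")  # 76.8
    print(f"whole GPU @ 300 MHz: {whole_gpu:.1f} GFLOPS")       # 307.2
    ```

    So the arithmetic is internally consistent; whether the chart's 76.8 figure was meant per core or per GPU hinges entirely on that starting premise.
    
    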
  • ronnieryan - Saturday, January 11, 2014 - link

    @Anand: Sir, could you make a review on the history of the iPhone's home button? I would really like to know how tough the iPhone 5s home button is. I was an Android user and wanted to try something new - new in the sense of a 64-bit processor. But I want to know how strong the 5s home button is. Please do make a review of the home button; I would really like to know. Email me the link if that's OK... Thanks :D Reply
  • casualphoenix - Wednesday, January 22, 2014 - link

    Hi Anand,

    Hope you're doing well today. My name is Nate Humphries and I'm the Tech/Science editor at CultureMass.com.

    I've been reading through your iPhone 5S and iPad Air articles in preparation for an article about the A7 chip, and it's been an extremely informative read. I wanted to ask if I could use your benchmark charts in my article if I provide proper citation back to your article. I think they would be very helpful for our readers.

    Let me know how that sounds, and I look forward to hearing back from you.

    Thanks,

    Nate Humphries
    Tech/Science Editor | CultureMass
    nate.humphries@culturemass.com
    Reply
  • besweeet - Sunday, March 30, 2014 - link

    I'm curious as to how this website did their 4G LTE tests... On AT&T, I could probably achieve those numbers. Swap that SIM out for one from T-Mobile, and regardless of signal strength, the numbers would instantly drop dramatically. Reply
