Final Words

The iPhone 5s is quite possibly the biggest S-update we've ever seen from Apple. I remember walking out of the venue after Apple's iPhone 5 launch blown away by the amount of innovation, at the platform/silicon level, that Apple had crammed into the iPhone 5. What got me last time was that Apple had built its own ARM-based CPU architecture from the ground up. While I understand that doesn't matter to the majority of consumers, it's no less of an achievement in my eyes. At the same time I remember reading through a sea of disappointment on Twitter - users hoping for more from Apple with the iPhone 5. If you fell into that group last time, there's no way you're going to be impressed by the iPhone 5s. For me however, there's quite a bit to be excited about.

The A7 SoC is seriously impressive. Apple calls it a desktop-class SoC, but I'd rather refer to it as something capable of competing with the best Intel has to offer in this market. In many cases the A7's dual cores were competitive with Intel's recently announced Bay Trail SoC. Web browsing is ultimately where I noticed the A7's performance the most. As long as I was on a good internet connection, web pages just appeared after resolving DNS. The A7's GPU performance is also insanely good - more than enough for anything you could possibly throw at the iPhone 5s today, and fast enough to help keep this device feeling quick for a while.

Apple's move to 64-bit proves that it is not only committed to supporting its own microarchitectures in the mobile space, but also that it is being a good steward of the platform. Just like AMD had to do in the mid-2000s, Apple must plan ahead for the future of iOS, and that's exactly what it has done. The immediate upsides to moving to 64-bit today are increased performance across the board as well as some huge potential performance gains in certain FP and cryptographic workloads.
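
To make the cryptographic upside concrete, here's a minimal sketch (an illustration of my own, not code from Apple or this review) of what the move enables: ARMv8's optional crypto extension adds dedicated AES and SHA instructions, exposed in C through arm_neon.h intrinsics when building with something like -march=armv8-a+crypto.

    #include <arm_neon.h>

    /* One AES round on a 16-byte block. AESE (vaeseq_u8) performs
       AddRoundKey + ShiftRows + SubBytes in a single instruction, and
       AESMC (vaesmcq_u8) performs MixColumns; AES-128 chains rounds
       like this (the final round skips MixColumns). Code compiled for
       ARMv7 - as 32-bit iOS apps are - instead falls back to software
       AES costing dozens of instructions and table lookups per block. */
    static uint8x16_t aes_round(uint8x16_t block, uint8x16_t round_key)
    {
        block = vaeseq_u8(block, round_key);
        return vaesmcq_u8(block);
    }

This is exactly the kind of workload behind the outsized AES/SHA gains in the 64-bit Geekbench results discussed in the comments below.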

The new camera is an evolutionary but much appreciated step forward compared to the iPhone 5. Low light performance is undoubtedly better, and Apple presents its users with an interesting balance of spatial resolution and low light sensitivity. The HTC One proved polarizing among users who wanted more resolution and not just great low light performance - with the 5s, Apple attempts to strike a more conservative balance. The 5s also benefits from iOS's excellent auto mode, which does quite well for novice photographers. I would love to see full manual control exposed in the camera UI, but the auto mode is good enough for those who don't want to mess with settings. The A7's improved ISP means things like HDR captures are significantly quicker than they were even on the iPhone 5. Shot to shot latency is also incredibly low.

Apple's Touch ID was the biggest surprise for me. I found it very well executed and a nice part of the overall experience. When switching between the 5s and the 5/5c, I immediately miss Touch ID. Apple is still a bit too conservative about where it allows Touch ID in place of a passcode, but even just as a way to unlock the device and avoid typing in my iCloud password when downloading apps it's a real improvement. I originally expected Touch ID to be very gimmicky, but now I suspect it's a feature we'll see used far more frequently on other platforms as well.

The 5s builds upon the same chassis as the iPhone 5 and with that comes a number of tradeoffs. I still love the chassis, design and build quality - I just wish it had a larger display. While I don't believe the world needs to embrace 6-inch displays, I do feel there is room for another sweet spot above 4 inches. For me personally, Motorola has come the closest with the Moto X, and I would love to see what Apple does with a larger chassis. The iPhone has always been a remarkably power efficient platform; a larger chassis wouldn't just give it a bigger, more usable screen but also a much larger battery to boot. I'm not saying that replacing the 4-inch 5s chassis is the only option; I'd be fine with a third model sitting above it in screen size/battery capacity, similar to how there are both 13 and 15-inch MacBook Pros.

The lack of 802.11ac and LTE-A support also bothers me, as the 5s is otherwise so far ahead of the curve in silicon. There's not much I can say to either point other than that both will obviously be present in next year's model, and for some they may be features worth waiting for.

At the end of the day, if you prefer iOS for your smartphone - the iPhone 5s won't disappoint. In many ways it's an evolutionary improvement over the iPhone 5, but in others it is a significant step forward. What Apple's silicon teams have been doing for these past couple of years has really started to pay off. From a CPU and GPU standpoint, the 5s is probably the most futureproof iPhone ever launched. As much as it pains me to use the word futureproof, if you're one of those people who like to hold onto their device for a while - the 5s is as good a starting point as any.

Comments

  • MatthiasP - Tuesday, September 17, 2013

    Wow, first real review on the web AND deep as always, a very nice job from Anand. :)
  • sfaerew - Wednesday, September 18, 2013

    Benchmarks (GFXBench 2.7, 3DMark, Basemark X, etc.) are the AArch64 versions?
    There is a 30~40% performance gap between the v32 and v64 Geekbench results.
    INT (ST): 1471 vs 1065.
    FP (ST): 1339 vs 983.
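    (Checking that gap against the scores quoted: 1471/1065 ≈ 1.38 and 1339/983 ≈ 1.36, so the 64-bit results are indeed roughly 36-38% higher.)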
  • Wilco1 - Wednesday, September 18, 2013

    And Bay Trail Geekbench at 2.4GHz: 1063 (INT), 866 (FP)

    So the A7 has already beaten BT by a huge margin, despite BT not even being on sale yet...
  • TraderHorn - Wednesday, September 18, 2013

    You're comparing the 64-bit A7 vs the 32-bit BT. The 32-bit #s are dead even. It'll be interesting to see if BT gets a similar performance boost when 64-bit versions of Win8 are released in 1H 2014.
  • Wilco1 - Wednesday, September 18, 2013

    BT's 32-bit result includes hardware accelerated AES, which skews its score (without it, its score is ~936). The 64-bit A7 result also uses hardware acceleration, so it is more comparable.

    Yes, BT will get a speedup from 64-bit as well, but it won't be nearly as much as the A7 gets: its 32-bit result already has the AES acceleration, and x64 isn't nearly as different from x86 as A64 is from A32.

    However, the interesting thing is not that the A7 wins by a good margin even in 32-bit, but that it wins despite running at almost half the frequency of Bay Trail... Forget about Bay Trail, this is Haswell territory - the MacBook Air with the 15W 3.3GHz i7-4650U scores 3024 INT and 3003 FP.
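
    (Normalizing the scores in this thread per GHz makes the frequency point starkly: the A7's 32-bit INT result works out to 1065/1.3 ≈ 819 per GHz versus Bay Trail's 1063/2.4 ≈ 443 per GHz - roughly 1.8x the per-clock throughput, at least on Geekbench's terms.)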

    Now imagine a quad core tablet/laptop version of the A7 running at 2GHz on TSMC 20nm next year.
  • smartypnt4 - Wednesday, September 18, 2013

    Why does the frequency matter? If the TDPs of the chips are similar (Bay Trail was tested and verified by Anand as using 2.5W at the SoC level under load), who gives a flip about the frequency?

    If Apple wanted to double the frequency of the chip, they'd need something on the order of 4x the amount of power it already consumes (assuming a back-of-the-napkin quadratic relationship, which is approximately correct), putting it at ~6-8W or so at full load. That's assuming such a scaling could even be done, which is unlikely given that Apple built the thing to run at 1.3GHz max. You can't just say "oh, I want these to switch faster, so let's up the voltage." There's more that goes into the ability to scale voltage than just the process node you're on.
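
    (Spelling out that napkin math: dynamic power scales roughly as P ≈ C·V²·f, and since voltage must rise along with frequency, a quadratic-in-frequency estimate is if anything charitable. Doubling the clock then gives P_new ≈ P_old × (2.6/1.3)² = 4 × P_old, so the ~6-8W figure above implies the A7 draws roughly 1.5-2W at full load today.)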

    Now, I will agree that this does prove that if Apple really wanted to, they could build something to compete with Haswell in terms of raw throughput. Next year's A8 or whatever probably will compete directly with Haswell in raw theoretical integer and FP throughput, if Apple manages to double performance again. That's not a given since they had to use ~50% more transistors to get a performance doubling from the A6 to the A7, and building a 1.5B transistor chip is nontrivial since yields are inversely proportional to the number of transistors you're using.
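
    (For reference, the usual first-order model behind that yield intuition: with defect density D0 and die area A, a Poisson model gives Y ≈ e^(−D0·A), so doubling die area squares the yield fraction - a die yielding 80% at area A drops to ~64% at 2A.)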

    Next year will be really interesting, though. What with Apple's next stuff, Broadwell, the first A57 designs, Airmont, and whatever Qualcomm puts out (haven't seen anything on that, which is odd for Qualcomm.)
  • Wilco1 - Wednesday, September 18, 2013

    Frequency & process matter. Current phones use about 2W at max load without the screen (see the recent Nexus 7 test), so the claimed 2.5W just for BT is way too much for a phone. That means (as you explained) it must run at a lower frequency and voltage to get into phones - my guess is we won't see anything faster than the Z3740 with a max clock of 1.8GHz. Therefore the A7 will extend its lead even further.

    According to TSMC 20nm will give a 30% frequency boost at the same power. So I'd expect that a 2GHz A7 would be possible on 20nm using only 35% more power. That means the A7 would get 75% more performance at a small cost in power consumption. This is without adding any extra transistors.

    Add some tweaks (like faster memory) and such a 2GHz A7 would be similar in performance to the 15W Haswell in the MacBook Air. So my point is that with a die shrink and a slight increase in power they already have a Haswell competitor.
  • smartypnt4 - Wednesday, September 18, 2013

    Frequency and process matter in that they affect power consumption. If Intel can get Bay Trail to do 2.4GHz on something like 1.0V, then the power should be fine. Current Haswell stuff tops out its voltage around 1.1V or so in laptops (if memory serves), so that's not unreasonable.

    All of this assumes Geekbench is valid for comparing HSW on Win8 to ARMv8/Cyclone on iOS, which I have serious reservations about attempting to do.

    The other issue I have is this: you're talking about a 50% clock boost giving a 100% increase in performance if we look at the Geekbench scores. That's simply not possible. Had you said "raise the clock to 1.6-1.7GHz and give it 4 cores," I'd be right behind you in a 2x theoretical performance increase. But a 50% clock boost will never yield a 100% increase with the same core, even if you change the memory controller.

    Also, somehow your math doesn't add up for power... Are you hypothesizing that a 2GHz A7 (with 75% of the performance of Haswell 15W, not the same - as per Geekbench) can pull 2.6W while Haswell needs 15W to run that test? Granted, Haswell integrates things that the A7 doesn't. Namely, more advanced I/O (PCIe, SATA, USB, etc.), and the PCH. Using very fuzzy math, you can claim all of that uses 1/2 the power of the chip.

    That brings Haswell's power for compute down to 7-8W, more or less. And you're going to tell me that Apple has figured out how to get 75% of the performance of a 7W part in 2.6W, and Intel hasn't? Both companies have ~100k employees. One is working on a ton of different stuff, and one makes processors basically exclusively (SSDs and WiFi stuff too, but processors are their main focus). You're telling me that a (relatively) small cadre of guys at Apple has figured out how to do it, and Intel hasn't done it yet on a part that costs ~6x as much, after trying to get deep into the mobile space for years? I find that very hard to believe.

    Even with the 14nm shrink next year, you're talking about a 30% power savings for Intel's stuff. That brings the 15W total down to 10.5W, and the (again, super, ridiculously fuzzy) computing power to ~5-6W. On a full node smaller than what Apple has access to. And you're saying they'd hypothetically compete in throughput with a 2.6W part. I'm not sure I believe that.

    Then again, I suppose theoretical throughput could be competitive. That's simply a function of your peak IPC, not your average IPC while the device is running. I don't know enough about the low level architecture of the A7 (no one does), so I'll just leave it here I guess.

    I'm gonna go now... I'm starting to reason in circles.
  • Wilco1 - Wednesday, September 18, 2013

    The sort of "simple" tweaks I was thinking of are: an improved memory controller and prefetcher, doubling of L2, larger branch predictor tables. Assuming a 30% gain due to those tweaks, the result is a 100% speedup at 2GHz (1.3 to 2.0 GHz is a 54% speedup, so you get 1.54 * 1.3 = 2.0x perf). The 30% gain due to tweaks is pure speculation of course, however NVidia claims 15-30% IPC gain for similar tweaks in Tegra 4i, so it's not entirely implausible. As you say a much simpler alternative would be just to double the cores, but then your single threaded performance is still well below that of Haswell.

    You can certainly argue for some reduction in the 15W TDP of Haswell due to IO; however, with Turbo it will try to use most of that 15W if it can (the Air goes up to 3.3GHz, after all).

    Yes, I am saying that a relative newcomer like Apple can compete with Intel. Intel may be large, but they are not infallible - after all, they made the P4, Itanium and Atom. A key reason AMD cited for moving into ARM servers was that designing an ARM CPU takes far less effort than an equivalent-performing x86 one. So the ISA does still matter, despite some claiming it no longer does.
  • smartypnt4 - Wednesday, September 18, 2013

    My point wasn't that Apple can't compete; far from it. If anything, the A7 shows they can compete for the most part. However, what you suggest is that Apple could theoretically match Intel's performance at half the power, on a process a full node larger. I have no illusions that Intel is infallible. Stuff like Larrabee and the underwhelming GPU in Bay Trail proves that they aren't. I just seriously doubt that Apple could beat Intel at its own game - specifically, in CPU performance, which is an area it has dominated for years. It's possible, but I find it relatively unlikely, especially this early in Apple's lifetime as a chip designer.

    On a different note, after looking at the Geekbench results more, I feel like it's improperly weighted. The massive performance improvement in AES and SHA encryption may be skewing the overall result... I need to dig into Geekbench more before coming to an actual conclusion. I'm also still not convinced that comparing cross-platform results is actually valid. I'd like to believe it is, but I've always had reservations about it.
