Original Link: https://www.anandtech.com/show/7335/the-iphone-5s-review



For much of the iPhone's life Apple has enjoyed a first-mover advantage. At the launch of the first iPhone, Steve Jobs expected the device and OS would give it a multi-year head start over the competition. Indeed that's how the market played out. Although Android was met with some early success, it wasn't until well after the launch of the first Android devices that we started seeing broad, mainstream acceptance of the platform. The iPhone bought Apple time, and together with the iPad it brought Apple a tremendous amount of profit over the years. The trick of course is turning a first-mover advantage into an indefinitely dominant market position, a difficult task when you're only making one device a year.

Today we find Apple in a very different position. The iPhone is still loved by a very loyal customer base, but the competition is much stronger than it was back in 2007. The modern smartphone market has also evolved. When Apple introduced the original iPhone with its 3.5" display, Steve called it "giant" on stage. Today even HTC's One mini ships with a 4.3" display.

Last year we saw Apple begin to address the changing landscape with the iPhone 5. The 5 saw Apple moving to a thinner, lighter chassis with much better internals and a significantly larger display. While there is market demand for Apple to do the same again, and move to an even larger display, there are some traditions Apple is sticking to. In this case, it's the tradition of the S-update.

The iPhone 5s continues Apple's tradition of introducing a performance-focused upgrade in the final year of a chassis design. The first time we encountered an S-update was with the 3GS, which took the iPhone away from its sluggish ARM11 roots and into the world of the Cortex A8.

The next S-upgrade came with the iPhone 4S: Apple’s first smartphone to use a dual-core SoC. At the time I remember debate over whether or not a performance upgrade alone was enough to sell a new device, especially one that didn’t look any different. I’m pretty much never happy with the performance I have, so I eagerly welcomed the new platform. Looking back at the iPhone 4 vs. 4S today, I’d say the investment was probably worth it. In preparation for this review I threw iOS 7 on every iPhone that would support it, dating back to the iPhone 4. In my experience, the 4 is a bit too slow running iOS 7 - the 4S really should be the minimum requirement from a performance standpoint.

That brings us to the iPhone 5s, the third in a list of S-upgrades to the iPhone platform. Like the S-devices that came before it, the iPhone 5s is left in the unfortunate position of not being able to significantly differentiate itself visually from its predecessor. This time around Apple has tried to make things a bit better by offering the 5s in new finishes. While the iPhone 5 launched in silver and black options, the 5s retains silver, replaces black with a new space grey and adds a third, gold finish.


old black iPhone 5 (left) vs. new space grey iPhone 5s (right)

I was sampled a space grey iPhone 5s, which worked out well given my iPhone 5 was black. The new space grey finish is lighter in color (truly a grey rather than a black) and has more prominently colored chamfers. The move to a lighter color is likely to not only offer a little bit of visual differentiation, but also to minimize the appearance of scuffs/scratches on the device. My black iPhone 5 held up reasonably well considering I carry it without a case, but there’s no denying the fact that it looks aged. Interestingly enough, I never really got any scratches on the back of my 5 - it’s the chamfers that took the biggest beating. I have a feeling the new space grey finish will hold up a lot better in that regard as well.

The addition of a gold option is an interesting choice. Brian and I saw the gold iPhone up close at Apple's Town Hall event and it really doesn't look bad at all. It's a very subtle gold finish rather than a gaudy gold brick effect. Gold is likely the finish I'd opt for simply because it'd be very different from everything else I have, but otherwise space grey is probably the best looking of the three devices to me.

Along with the new finishes come new leather cases to protect the 5s. These cases are designed and sold by Apple, and they are backwards compatible with the iPhone 5 as well. Apple calls them leather cases but I'm not entirely sure if we're talking about real leather here or something synthetic. Either way, the new cases feel great. They've got a very smooth, soft texture to them, and are lined with a suede-like material.

The new cases don't add a tremendous amount of bulk to the device either. The cases are available in 5 different colors and retail for $39:

I was sampled a beige case and have been using it non-stop for the past week. I like the case a lot, and it did a great job protecting the 5s while I was traveling. I took all of the photos of the review device after I returned home, but thanks to the case the device still looked as good as new. If you're considering one of these cases you might want to opt for a darker color, as the edges of my case started to wear from constantly pulling the phone out of my pockets:

If you're fine with the distressed leather look then it's not a concern, but if you're hoping to keep your case pristine you may want to look at other cases. If you want a more affordable & more rugged option, Brian turned me on to the Magpul Field case which should work perfectly with the iPhone 5s.

Since the 5s is an S-upgrade, the chassis remains unchanged compared to the iPhone 5. The 5s' dimensions are identical to those of the iPhone 5, down to the last millimeter of size and gram of weight. Construction, build quality and in-hand feel continue to be excellent for the iPhone 5s. Despite the diet the iPhone went on last year, the 5/5s chassis is still substantial enough to feel like a quality product. I remember criticisms at the iPhone 5's launch that it felt too light. Now going back and holding an iPhone 4S, it feels like the very opposite is true - the 4S was too heavy.

The iPhone 5s design remains one of the most compact flagship smartphones available. The move to a 4-inch display last year was very necessary, but some will undoubtedly be disappointed by the lack of any further progress on the screen dimension front. A larger display obviously wasn’t in the cards this generation, but I have a strong suspicion Apple has already reconsidered its position on building an even larger iPhone. Part of the problem is the iPhone’s usable display area is very much governed by the physical home button and large earpiece/camera area at the top of the device. Building a larger iPhone that isn’t unwieldy likely requires revisiting both of these design decisions. It’s just too tall of an order for a refresh on the same chassis.

Brian often talks about smartphone size very much being a personal preference, and for many the iPhone 5 continues to be a good target. If you fall into that category, the 5s obviously won’t disappoint. Personally, I would’ve appreciated something a bit larger that made better use of the front facing real estate. The 5s’ width is almost perfect for my hands. I could deal with the device being a little larger, with the ideal size for me landing somewhere between the iPhone 5 and the Moto X.

It remains to be seen what impact display size will have on iPhone sales. Anecdotally I know a number of die-hard iPhone users who simply want a larger display and are willing to consider Android as a result. I still believe that users don't really cross shop between Android and iOS, but if Apple doesn't offer a larger display option soon then I believe it will lose some users - not because of cross shopping, but out of frustration.

As a refreshed design, the iPhone 5s carries over all of the innovations we saw in the 5 last year. The iPhone 5s features the same Lightning connector that debuted on the iPhone 5, and has since been extended to the iPad lineup as well as the new iPods.

As with all other S-upgrades, the biggest changes to the iPhone 5s are beneath the aluminum and glass exterior. The 5s’ flagship feature? Apple’s new A7 SoC. The A7 is the world's first 64-bit smartphone SoC, and the first 64-bit mobile SoC shipping in a product (Intel’s Bay Trail is 64-bit but it won’t ship as such, and has yet to ship regardless). In addition to the new 64-bit SoC Apple upgraded both cameras in the iPhone 5s and added a brand new fingerprint sensor called Touch ID. Of course the iPhone 5s is one of the first new iPhones to ship with iOS 7 from the factory.

Model: Apple iPhone 5 | Apple iPhone 5c | Apple iPhone 5s
SoC: Apple A6 | Apple A6 | Apple A7
Display: 4-inch 1136 x 640 LCD, sRGB coverage, in-cell touch (all three)
RAM: 1GB LPDDR2 | 1GB LPDDR2 | 1GB LPDDR3
WiFi: 2.4/5GHz 802.11a/b/g/n, BT 4.0 (all three)
Storage: 16GB/32GB/64GB | 16GB/32GB | 16GB/32GB/64GB
I/O: Lightning connector, 3.5mm headphone jack (all three)
Current OS: iOS 7 (all three)
Battery: 1440 mAh, 3.8V, 5.45 Whr | 1507 mAh, 3.8V, 5.73 Whr | 1570 mAh, 3.8V, 5.96 Whr
Size / Mass: 123.8 x 58.6 x 7.6 mm, 112 grams | 124.4 x 59.2 x 8.97 mm, 132 grams | 123.8 x 58.6 x 7.6 mm, 112 grams
Rear Camera: 8MP iSight, 1.4µm pixels | 8MP iSight, 1.4µm pixels | 8MP iSight, 1.5µm pixels + True Tone flash
Front Camera: 1.2MP, 1.75µm pixels | 1.2MP, 1.9µm pixels | 1.2MP, 1.9µm pixels
Price: $199 (16GB), $299 (32GB), $399 (64GB) on 2 year contract | $99 (16GB), $199 (32GB) on 2 year contract | $199 (16GB), $299 (32GB), $399 (64GB) on 2 year contract

The iPhone 5s also breaks with tradition in a couple of ways. The 5s is the first iPhone in recent history to not be offered up for pre-order. Apple expects demand for the iPhone 5s to severely outstrip supply, and as a result won't be accepting pre-orders on the 5s.

The other big change is what happens to the previous generation iPhone. In the past, Apple would discount the previous generation iPhone by $100 on-contract and continue to sell those devices at low capacity points. A two-generation old iPhone was often offered for free on-contract as well. This time, the iPhone 5s replaces the iPhone 5 at the high end, but the iPhone 5 ceases production. Instead, the 5 is replaced with a cost reduced version (the iPhone 5c). As the glass & aluminum iPhone 5/5s chassis likely doesn't scale well in price, coming up with a new polycarbonate design for slightly lower price points makes sense. I have written a separate piece on the iPhone 5c as I have more than enough to talk about with the iPhone 5s in this review.

I'll start with the big ticket item: Apple's 64-bit A7 SoC.



A7 SoC Explained

I’m still surprised by the amount of confusion around Apple’s CPU cores, so that’s where I’ll start. I’ve already outlined how ARM’s business model works, but in short there are two basic types of licenses ARM will bestow upon its partners: processor and architecture. The former involves implementing an ARM designed CPU core, while the latter is the creation of an ARM ISA (Instruction Set Architecture) compatible CPU core.

NVIDIA and Samsung, up to this point, have gone the processor license route. They take ARM designed cores (e.g. Cortex A9, Cortex A15, Cortex A7) and integrate them into custom SoCs. In NVIDIA’s case the CPU cores are paired with NVIDIA’s own GPU, while Samsung licenses GPU designs from ARM and Imagination Technologies. Apple previously leveraged its ARM processor license as well. Until last year’s A6 SoC, all Apple SoCs leveraged CPU cores designed by and licensed from ARM.

With the A6 SoC however, Apple joined Qualcomm in leveraging an ARM architecture license. At the heart of the A6 were a pair of Apple designed CPU cores that implemented the ARMv7-A ISA. I came to know these cores by their leaked codename: Swift.

At its introduction, Swift proved to be one of the best designs on the market. An excellent combination of performance and power consumption, the Swift based A6 SoC improved power efficiency over the previous Cortex A9 based design. Swift also proved to be competitive with the best from Qualcomm at the time. Since then however, Qualcomm has released two evolutions of its CPU core (Krait 300 and Krait 400), and pretty much regained performance leadership over Apple. With Apple on a yearly release cadence, the A7 is its only shot at taking back the crown for the next 12 months.

Following tradition, Apple replaces its A6 SoC with a new generation: A7.

With only a week to test battery life, performance, wireless and cameras on two phones, in addition to actually using them as intended, there wasn’t a ton of time to go ridiculously deep into the new SoC’s architecture. Here’s what I’ve been able to piece together thus far.

First off, based on conversations with as many people in the know as possible, as well as just making an educated guess, it’s probably pretty safe to say that the A7 SoC is built on Samsung’s 28nm HK+MG process. It’s too early for 20nm at reasonable yields, and Apple isn’t ready to move some (not all) of its operations to TSMC.

The jump from 32nm to 28nm results in peak theoretical scaling of 76.5% (the same design on 28nm can be no smaller than 76.5% of the die area at 32nm). In reality, nothing ever scales perfectly so we’re probably talking about 80 - 85% tops. Either way that’s a good amount of room for new features.
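That 76.5% figure is just the square of the ideal linear shrink - a best case that assumes every structure on the die scales perfectly with the process:

```latex
\frac{A_{28\,\mathrm{nm}}}{A_{32\,\mathrm{nm}}} \approx \left(\frac{28}{32}\right)^{2} \approx 0.766
```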

At its launch event Apple officially announced both the die size of the A7 (102mm^2) and its transistor count (over 1 billion). Don't underestimate the magnitude of both of these disclosures. The technical folks at Cupertino are clearly winning some battle to talk more about their designs and not less. We're not yet at the point where I'm getting pretty diagrams and a deep dive, but it's clear that Apple is beginning to open up more (and it's awesome).

Apple has never previously disclosed transistor count. I also don’t know if this “over 1 billion” figure is based on a schematic or layout transistor count. The only additional detail I have is that Apple is claiming a near doubling of transistors compared to the A6. Looking at die sizes and taking into account scaling from the process node shift, there’s clearly a more fundamental change to the chip’s design. It is possible to optimize a design (and transistors) for area, which seems to be what has happened here.

The CPU cores are, once again, a custom design by Apple. These aren’t Cortex A57 derivatives (still too early for that), but rather some evolution of Apple’s own Swift architecture. I’ll dive into specifics of what I’ve been able to find in a moment. To answer the first question on everyone’s mind, I believe there are two of these cores on the A7. Before I explain how I arrived at this conclusion, let’s first talk about cores and clock speeds.

The transition from 2 to 4 cores happened quicker in mobile than I had expected. Thankfully there are some well threaded apps that have been able to take advantage of more than two cores, and power gating keeps the negative impact of the additional cores down to a minimum. As we saw in our Moto X review however, two faster cores are still better for most uses than four cores running at lower frequencies. NVIDIA forced everyone's hand in moving to 4 cores earlier than they would've liked, and now you pretty much can't get away with shipping anything less than that in an Android handset. Even Motorola felt it necessary to obfuscate core count with its X8 mobile computing system. Markets like China seem to also demand more cores over better ones, which is why we see such a proliferation of quad-core Cortex A5/A7 designs. Apple has traditionally been sensible in this regard, even dating back to core count decisions in its Macs. I remember reviewing an old iMac and pitting it against a Dell XPS One at the time. This was in the pre-power gating/turbo days. Dell went the route of more cores, while Apple opted for fewer, faster ones. It also put the CPU savings into a better GPU. You can guess which system ended up ahead.

In such a thermally constrained environment, going quad-core only makes sense if you can properly power gate/turbo up when some cores are idle. I have yet to see any mobile SoC vendor (with the exception of Intel with Bay Trail) do this properly, so until we hit that point the optimal target is likely two cores. You only need to look back at the evolution of the PC to come to the same conclusion. Before the arrival of Nehalem and Lynnfield, you always had to make a tradeoff between fewer faster cores and more of them. Gaming systems (and most users) tended to opt for the former, while those doing heavy multitasking went with the latter. Once we got architectures with good turbo, the 2 vs 4 discussion became one of cost and nothing more. I expect we’ll follow the same path in mobile.

Then there’s the frequency discussion. Brian and I have long been hinting at the sort of ridiculous frequency/voltage combinations mobile SoC vendors have been shipping at for nothing more than marketing purposes. I remember ARM telling me the ideal target for a Cortex A15 core in a smartphone was 1.2GHz. Samsung’s Exynos 5410 stuck four Cortex A15s in a phone with a max clock of 1.6GHz. The 5420 increases that to 1.7GHz. The problem with frequency scaling alone is that it typically comes at the price of higher voltage. There’s a quadratic relationship between voltage and power consumption, so it’s quite possibly one of the worst ways to get more performance. Brian even tweeted an image showing the frequency/voltage curve for a high-end mobile SoC. Note the huge increase in voltage required to deliver what amounts to another 100MHz in frequency.
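The relationship in question is the classic dynamic power equation; frequency enters linearly but voltage enters as a square, and squeezing out those last few hundred MHz generally requires raising voltage as well, so power climbs far faster than performance at the top of the curve:

```latex
P_{\mathrm{dynamic}} \approx \alpha \, C_{\mathrm{eff}} \, V^{2} f
```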

The combination of both of these things gives us a basis for why Apple settled on two Swift cores running at 1.3GHz in the A6, and it’s also why the A7 comes with two cores running at the same max frequency. Interestingly enough, this is the same max non-turbo frequency Intel settled at for Bay Trail. Given a faster process (and turbo), I would expect to see Apple push higher frequencies but without those things, remaining conservative makes sense. I verified frequency through a combination of reporting tools and benchmarks. While it’s possible that I’m wrong, everything I’ve run on the device (both public and not) points to a 1.3GHz max frequency.

Verifying core count is a bit easier. Many benchmarks report core count, and I also have some internal tools that do the same - all agreed on the same 2 cores/2 threads conclusion. Geekbench 3 breaks out both single and multithreaded performance results. I checked with the developer to ensure that the number of threads isn't hard-coded; the benchmark queries the max number of logical CPUs before spawning that number of threads (a minimal sketch of that sort of query follows the table below). Looking at the ratio of single to multithreaded performance on the iPhone 5s, it's safe to say that we're dealing with a dual-core part:

Geekbench 3 Single vs. Multithreaded Performance - Apple A7
  Integer FP
Single Threaded 1471 1339
Multi Threaded 2872 2659
A7 Advantage 1.97x 1.99x
Peak Theoretical 2C Advantage 2.00x 2.00x
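For reference, the logical CPU count these tools query is exposed through sysctl on iOS (and OS X). Below is a minimal sketch of that sort of check using the standard Darwin sysctl names (hw.logicalcpu and hw.ncpu) - this is illustration, not Geekbench's actual code. Given everything above, both values should come back as 2 on the 5s.

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/sysctl.h>

int main(void)
{
    int ncpu = 0;
    size_t len = sizeof(ncpu);

    /* Number of logical CPUs the OS exposes to applications */
    if (sysctlbyname("hw.logicalcpu", &ncpu, &len, NULL, 0) == 0)
        printf("hw.logicalcpu: %d\n", ncpu);

    /* Number of CPUs the hardware reports */
    len = sizeof(ncpu);
    if (sysctlbyname("hw.ncpu", &ncpu, &len, NULL, 0) == 0)
        printf("hw.ncpu:       %d\n", ncpu);

    return 0;
}
```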

Now the question is, what’s changed in these cores?

 



After Swift Comes Cyclone

I was fortunate enough to receive a tip last time that pointed me at some LLVM documentation calling out Apple’s Swift core by name. Scrubbing through those same docs, it seems like my leak has been plugged. Fortunately I came across a unique string looking at the iPhone 5s while it booted:

I can’t find any other references to Oscar online, in LLVM documentation or anywhere else of value. I also didn’t see Oscar references on prior iPhones, only on the 5s. I’d heard that this new core wasn’t called Swift, referencing just how different it was. Obviously Apple isn’t going to tell me what it’s called, so I’m going with Oscar unless someone tells me otherwise.

As it turns out, Oscar is the CPU core inside the M7; Cyclone is the name of the Swift replacement.

Cyclone is likely a beefier Swift core (or at least Swift inspired) rather than a brand new design from the ground up. That means we're likely talking about a 3-wide front end, and somewhere in the 5 - 7 range of execution ports. The design is likely also capable of out-of-order execution, given the performance levels we've been seeing.

Cyclone is a 64-bit ARMv8 core, not an implementation of some Apple designed ISA. With Cyclone, Apple manages to beat not only all other smartphone makers to ARMv8, but also key ARM server partners. I'll talk about the whole 64-bit aspect of this next, but needless to say, this is a big deal.

The move to ARMv8 comes with some of its own performance enhancements. More registers, a cleaner ISA, improved SIMD extensions/performance as well as cryptographic acceleration are all on the menu for the new core.

Pipeline depth likely remains similar (maybe slightly longer) as frequencies haven't gone up at all (1.3GHz). The A7 doesn't feature support for any thermally driven CPU (or GPU) frequency boost.

The most visible change to Apple’s first ARMv8 core is a doubling of the L1 cache size: from 32KB/32KB (instruction/data) to 64KB/64KB. Along with this larger L1 cache comes an increase in access latency (from 2 clocks to 3 clocks from what I can tell), but the increase in hit rate likely makes up for the added latency. Such large L1 caches are quite common with AMD architectures, but unheard of in ultra mobile cores. A larger L1 cache will do a good job keeping the machine fed, implying a larger/more capable core.

The L2 cache remains unchanged in size at 1MB shared between both CPU cores. L2 access latency is improved tremendously with the new architecture. In some cases I measured L2 latency at half of what I saw with Swift.

The A7’s memory controller sees big improvements as well. I measured 20% lower main memory latency on the A7 compared to the A6. Branch prediction and memory prefetchers are both significantly better on the A7.
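The latency figures above come from pointer-chasing style microbenchmarks. The sketch below shows the general idea rather than my actual test code: build a dependent chain of loads spread across a given footprint and time the average cost per load. Footprints under 64KB should land in the 5s' L1, around 1MB in its L2, and anything much larger in DRAM. (On older iOS releases you'd time with mach_absolute_time instead of clock_gettime.)

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Walk a dependent chain of pointers spread across 'bytes' of memory and
 * report the average time per load. Small footprints stay resident in the
 * caches; large footprints spill out to DRAM. */
static double chase(size_t bytes, size_t iters)
{
    size_t count = bytes / sizeof(void *);
    void **slots = malloc(count * sizeof(void *));
    size_t *order = malloc(count * sizeof(size_t));

    /* Visit the slots in a random order so hardware prefetchers can't guess
     * the next address; the chain forms one big cycle through every slot. */
    for (size_t i = 0; i < count; i++) order[i] = i;
    for (size_t i = count - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    for (size_t i = 0; i < count; i++)
        slots[order[i]] = &slots[order[(i + 1) % count]];

    void **p = &slots[order[0]];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < iters; i++)
        p = (void **)*p;                  /* every load depends on the previous one */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    if (p == NULL) puts("unreachable");   /* keep the chase from being optimized away */

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    free(slots);
    free(order);
    return ns / (double)iters;
}

int main(void)
{
    size_t footprints[] = { 32 << 10, 64 << 10, 512 << 10, 1 << 20, 16 << 20 };
    for (int i = 0; i < 5; i++)
        printf("%6zu KB: %.2f ns per load\n",
               footprints[i] >> 10, chase(footprints[i], 10 * 1000 * 1000));
    return 0;
}
```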

On top of all of this, I noticed large increases in peak memory bandwidth, which I confirmed using a combination of custom tools and publicly available benchmarks. A quick look at Geekbench 3 (prior to the ARMv8 patch) gives a conservative estimate of memory bandwidth improvements:

Geekbench 3.0.0 Memory Bandwidth Comparison (1 thread)
  Stream Copy Stream Scale Stream Add Stream Triad
Apple A7 1.3GHz 5.24 GB/s 5.21 GB/s 5.74 GB/s 5.71 GB/s
Apple A6 1.3GHz 4.93 GB/s 3.77 GB/s 3.63 GB/s 3.62 GB/s
A7 Advantage 6% 38% 58% 57%

We see anywhere from a 6% improvement in memory bandwidth to nearly 60% running the same Stream code. I’m not entirely sure how Geekbench implemented Stream and whether or not we’re actually testing other execution paths in addition to (or instead of) memory bandwidth. One custom piece of code I used to measure memory bandwidth showed nearly a 2x increase in peak bandwidth. That may be overstating things a bit, but needless to say this new architecture has a vastly improved cache and memory interface.
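For the curious, Stream's triad kernel is simple enough to reproduce. Below is a minimal single-threaded sketch of the idea - not Geekbench's implementation, and a serious bandwidth test needs far more care with timing, repetitions and keeping the compiler from optimizing the loop away:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (4 * 1024 * 1024)   /* 4M doubles per array (32MB each), far bigger than the 1MB L2 */

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    const double scalar = 3.0;

    for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < N; i++)
        a[i] = b[i] + scalar * c[i];      /* Stream triad: two reads and one write per element */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs  = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double bytes = 3.0 * N * sizeof(double);
    printf("triad: %.2f GB/s (checksum %.1f)\n", bytes / secs / 1e9, a[N / 2]);

    free(a); free(b); free(c);
    return 0;
}
```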

Looking at low level Geekbench 3 results (again, prior to the ARMv8 patch), we get a good feel for just how much the CPU cores have improved.

Geekbench 3.0.0 Compute Performance
  Integer (ST) Integer (MT) FP (ST) FP (MT)
Apple A7 1.3GHz 1065 2095 983 1955
Apple A6 1.3GHz 750 1472 588 1165
A7 Advantage 42% 42% 67% 67%

Integer performance is up 44% on average, while floating point performance is up by 67%. Again this is without 64-bit or any other enhancements that go along with ARMv8. Memory bandwidth improves by 35% across all Geekbench tests. I confirmed with Apple that the A7 has a 64-bit wide memory interface, and we're likely talking about LPDDR3 memory this time around so there's probably some frequency uplift there as well.

The result is something Apple refers to as desktop-class CPU performance. I’ll get to evaluating those claims in a moment, but first, let’s talk about the other big part of the A7 story: the move to a 64-bit ISA.



The Move to 64-bit

Prior to the iPhone 5s launch, I heard a rumor that Apple would move to a 64-bit architecture with its A7 SoC. I initially discounted the rumor given the pain of moving to 64-bit from a validation standpoint and the upside not being worth it. Obviously, I was wrong.

In the PC world, most users are familiar with the 64-bit transition as something AMD kicked off in the early 2000s. The primary motivation back then was to enable greater memory addressability by moving from 32-bit addresses (2^32 or 4GB) to 64-bit addresses (2^64 or 16EB). Supporting up to 16 exabytes of memory from the get go seemed a little unnecessary, so AMD's x86-64 ISA only uses 48 bits for unique memory addresses (256TB of memory). Along with the move from x86 to x86-64 came some small performance enhancements thanks to more available general purpose registers in 64-bit mode.

In the ARM world, the move to 64-bit is motivated primarily by the same factor: a desire for more memory. Remember that ARM and its partners have high hopes of eating into Intel’s high margin server business, and you really can’t play there without 64-bit support. ARM has already announced its first two 64-bit architectures: the Cortex A57 and Cortex A53. The ISA itself is referred to as ARMv8, a logical successor to the present day 32-bit ARMv7.

Unlike the 64-bit x86 transition, ARM’s move to 64-bit comes with a new ISA rather than an extension of the old one. The new instruction set is referred to as A64, while a largely backwards compatible 32-bit format is called A32. Both ISAs can be supported by a single microprocessor design, as ARMv8 features two architectural states: AArch32 and AArch64. Designs that implement both states can switch/interleave between the two states on exception boundaries. In other words, despite A64 being a new ISA you’ll still be able to run old code alongside it. As always, in order to support both you need an OS with support for A64. You can’t run A64 code on an A32 OS. It is also possible to do an A64/AArch64-only design, which is something some server players are considering where backwards compatibility isn’t such a big deal.

Cyclone is a full implementation of ARMv8 with both AArch32 and AArch64 states. Given Apple’s desire to maintain backwards compatibility with existing iOS apps and not unnecessarily fragment the ARM ecosystem, simply embracing ARMv8 makes a lot of sense.
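From the developer's side the two states are mostly invisible: you compile the same source once for armv7 (A32) and once for arm64 (A64), and Xcode packages both slices into a single fat binary. Below is a trivial sketch of how code can tell which slice it was built as - the __arm64__/__arm__ macros are standard Clang defines, the rest is purely illustrative:

```c
#include <stdio.h>

int main(void)
{
    /* Pointer and long widths are the most visible difference between the two
     * states on iOS: A32 code runs as ILP32, A64 code runs as LP64. */
    printf("sizeof(void *) = %zu, sizeof(long) = %zu\n",
           sizeof(void *), sizeof(long));

#if defined(__arm64__) || defined(__aarch64__)
    puts("built for the AArch64 (A64) state");
#elif defined(__arm__)
    puts("built for the AArch32 (A32) state");
#else
    puts("built for something else entirely");
#endif
    return 0;
}
```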

The motivation for Apple to go 64-bit isn’t necessarily one of needing more address space immediately. A look at Apple’s historical scaling of memory capacity tells us everything we need to know:

At best Apple doubled memory capacity between generations, and at worst it took two generations before doubling. The iPhone 5s ships with 1GB of LPDDR3, keeping memory capacity the same as the iPhone 5, iPad 3 and iPad 4. It’s pretty safe to assume that Apple will go to 2GB with the iPhone 6 (and perhaps iPad 5), and then either stay there for the 6s or double again to 4GB. The soonest Apple would need 64-bit from a memory addressability standpoint in an iOS device would be 2015, and the latest would be 2016. Moving to 64-bit now preempts Apple’s hardware needs by 2 full years.

The more I think about it, the more the timing actually makes a lot of sense. The latest Xcode beta and LLVM compiler are both ARMv8 aware. Presumably all apps built starting with the official iOS 7 release and going forward could be built 64-bit aware. By the time 2015/2016 rolls around and Apple starts bumping into 32-bit addressability concerns, not only will it have navigated the OS transition but a huge number of apps will already be built for 64-bit. Apple tends to do well with these sorts of transitions, so starting early like this isn’t unusual. The rest of the ARM ecosystem is expected to begin moving to ARMv8 next year.

Apple isn't very focused on delivering a larger memory address space today, however. As A64 is a brand new ISA, there are other benefits that come along with the move. Similar to the x86-64 transition, the move to A64 comes with an increase in the number of general purpose registers. ARMv7 had 15 general purpose registers (and 1 register for the program counter), while ARMv8/A64 now has 31 that are each 64 bits wide. All 31 registers are accessible at all times. Increasing the number of architectural registers decreases register pressure and can directly impact performance. The doubling of the register space with x86-64 was responsible for up to a 10% increase in performance.

The original ARM architecture made nearly all instructions conditional, which ate up a sizable chunk of the instruction encoding space. The number of conditional instructions is far more limited in ARMv8/A64.

The move to ARMv8 also doubles the number of FP/NEON registers (from 16 to 32) and widens all of them to 128 bits (up from 64 bits). Support for 128-bit registers can go a long way in improving SIMD performance. Whereas simply doubling register count can provide moderate increases in performance, doubling the size of each register can be far more significant given the right workload. There are also new advanced SIMD instructions that are a part of ARMv8. Double precision SIMD FP math is now supported among other things.
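To make the SIMD changes concrete, here's a deliberately tiny example using the standard arm_neon.h intrinsics. The float64x2_t type and the _f64 intrinsics used here only exist when targeting AArch64 - this is exactly the double precision SIMD support that 32-bit NEON lacked:

```c
#include <stdio.h>
#include <arm_neon.h>   /* AArch64 NEON: 32 x 128-bit vector registers */

int main(void)
{
    double a[2] = { 1.5, 2.5 };
    double b[2] = { 4.0, 8.0 };
    double r[2];

    float64x2_t va = vld1q_f64(a);          /* two doubles in one 128-bit register */
    float64x2_t vb = vld1q_f64(b);
    float64x2_t vr = vfmaq_f64(va, vb, vb); /* fused multiply-add: va + vb * vb */

    vst1q_f64(r, vr);
    printf("%.1f %.1f\n", r[0], r[1]);      /* prints 17.5 66.5 */
    return 0;
}
```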

ARMv8 also adds some new cryptographic instructions for hardware acceleration of AES and SHA1/SHA256 algorithms. These hardware AES/SHA instructions have the potential for huge increases in performance, just like we saw with the introduction of AES-NI on Intel CPUs a few years back. Both the new advanced SIMD instructions and AES/SHA instructions are really designed to enable a new wave of iOS apps.
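Most apps won't issue the AES/SHA instructions directly; they'll pick them up (where Apple's libraries use them) through frameworks like CommonCrypto. Here's a small AES-128 sketch using CommonCrypto's CCCrypt - whether a given iOS build actually routes this through the new ARMv8 instructions on the A7 is an implementation detail I can't verify from the API side:

```c
#include <stdio.h>
#include <string.h>
#include <CommonCrypto/CommonCryptor.h>

int main(void)
{
    const char key[] = "0123456789abcdef";       /* 16-byte AES-128 key (demo only) */
    const char iv[]  = "fedcba9876543210";       /* 16-byte IV for CBC mode */
    const char plaintext[] = "sixteen byte msg";

    unsigned char ciphertext[64];
    size_t moved = 0;

    CCCryptorStatus status = CCCrypt(kCCEncrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding,
                                     key, kCCKeySizeAES128, iv,
                                     plaintext, strlen(plaintext),
                                     ciphertext, sizeof(ciphertext), &moved);

    if (status == kCCSuccess)
        printf("encrypted %zu plaintext bytes into %zu ciphertext bytes\n",
               strlen(plaintext), moved);
    return status == kCCSuccess ? 0 : 1;
}
```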

Many A64 instructions can also operate on 32-bit operands, with properly implemented designs simply power gating the unused bits. The A32 implementation in ARMv8 also adds some new instructions, so it's possible to compile AArch32 apps in ARMv8 that aren't backwards compatible. All existing ARMv7 and 32-bit Thumb code should work just fine however.

On the software side, iOS 7 as well as all first party apps ship already compiled for AArch64 operation. In fact, at boot, there isn’t a single AArch32 process running on the iPhone 5s:

Safari, Mail, everything all made the move to 64-bit right away. Given the popularity of these first party apps, it's not just the hardware that's 64-bit ready but much of the software is as well. The industry often speaks about Apple's vertically integrated advantage, and this is quite possibly the best example of that advantage. In many ways it reminds me of the Retina Display transition on OS X.

Running A32 and A64 applications in parallel is seamless. On the phone itself, it’s impossible to tell when you’re running in a mixed environment or when everything you’re running is 64-bit. It all just works.

I didn’t run into any backwards compatibility issues with existing 32-bit ARMv7 apps either. From an end user perspective, navigating the 64-bit transition is as simple as buying an iPhone 5s.

64-bit Performance Gains

Geekbench 3 was among the first apps to be updated with ARMv8 support. There are some minor changes between the new version of Geekbench (3.1) and its predecessor (3.0); however, the tests themselves (except for the memory benchmarks) haven't changed. What this allows us to do is look at the impact of the new ARMv8 A64 instructions and larger register space. We'll start with a look at integer performance:

Apple A7 - AArch64 vs. AArch32 Performance Comparison
  32-bit A32 64-bit A64 % Advantage
AES 91.5 MB/s 846.2 MB/s 825%
AES MT 180.2 MB/s 1640.0 MB/s 810%
Twofish 59.9 MB/s 55.6 MB/s -8%
Twofish MT 119.1 MB/s 110.2 MB/s -8%
SHA1 138.0 MB/s 477.3 MB/s 245%
SHA1 MT 275.7 MB/s 948.9 MB/s 244%
SHA2 86.1 MB/s 102.2 MB/s 18%
SHA2 MT 171.3 MB/s 203.7 MB/s 18%
BZip2 Compress 4.36 MB/s 4.52 MB/s 3%
BZip2 Compress MT 8.57 MB/s 8.86 MB/s 3%
BZip2 Decompress 5.94 MB/s 7.56 MB/s 27%
BZip2 Decompress MT 11.7 MB/s 15.0 MB/s 28%
JPEG Compress 15.5 MPixels/s 16.8 MPixels/s 8%
JPEG Compress MT 30.8 MPixels/s 33.3 MPixels/s 8%
JPEG Decompress 36.0 MPixels/s 40.3 MPixels/s 11%
JPEG Decompress MT 71.3 MPixels/s 78.1 MPixels/s 9%
PNG Compress 0.84 MPixels/s 1.14 MPixels/s 35%
PNG Compress MT 1.67 MPixels/s 2.26 MPixels/s 35%
PNG Decompress 13.9 MPixels/s 15.2 MPixels/s 9%
PNG Decompress MT 27.4 MPixels/s 29.8 MPixels/s 8%
Sobel 59.3 MPixels/s 58.0 MPixels/s -3%
Sobel MT 116.6 MPixels/s 114.6 MPixels/s -2%
Lua 1.25 MB/s 1.33 MB/s 6%
Lua MT 2.47 MB/s 2.49 MB/s 0%
Dijkstra 5.35 MPairs/s 4.05 MPairs/s -25%
Dijkstra MT 9.67 MPairs/s 7.26 MPairs/s -25%

The AES and SHA1 gains are a direct result of the new cryptographic instructions that are a part of ARMv8. The AES test in particular shows nearly an order of magnitude performance improvement. This is similar to what we saw in the PC space with the introduction of Intel's AES-NI support in Westmere. The Dijkstra workload is the only real regression. That test in particular appears to be very pointer-heavy, and the increase in pointer size from 32 to 64-bit increases cache pressure and causes the reduction in performance. The rest of the gains are much smaller, but still fairly significant if you take into account the fact that we're just looking at what you get from a recompile. Add these gains to the ones you're about to see over Apple's A6 SoC and the A7 is looking really good from a performance standpoint.
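The pointer growth is easy to visualize: any pointer-heavy data structure simply gets bigger when rebuilt for A64, so less of it fits in the A7's caches. A trivial illustration using a hypothetical node layout (not Geekbench's actual data structure):

```c
#include <stdio.h>

/* The sort of node a graph/list based workload might chase through memory.
 * The two pointers grow from 4 to 8 bytes each under LP64, so the same data
 * set occupies noticeably more cache when built for A64. */
struct node {
    struct node *next;
    struct node *prev;
    int          weight;
};

int main(void)
{
    printf("sizeof(void *)      = %zu bytes\n", sizeof(void *));
    printf("sizeof(struct node) = %zu bytes\n", sizeof(struct node));
    /* Typically 12 bytes when built as A32, 24 bytes (with padding) as A64. */
    return 0;
}
```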

If the integer results looked good, the FP results are even better:

Apple A7 - AArch64 vs. AArch32 Performance Comparison
  32-bit A32 64-bit A64 % Advantage
BlackScholes 4.73 MNodes/s 5.92 MNodes/s 25%
BlackScholes MT 9.57 MNodes/s 12.0 MNodes/s 25%
Mandelbrot 930.2 MFLOPS 929.9 MFLOPS 0%
Mandelbrot MT 1840 MFLOPS 1850 MFLOPS 0%
Sharpen Filter 805.1 MFLOPS 857 MFLOPS 6%
Sharpen Filter MT 1610 MFLOPS 1710 MFLOPS 6%
Blur Filter 1.08 GFLOPS 1.26 GFLOPS 16%
Blur Filter MT 2.15 GFLOPS 2.47 GFLOPS 14%
SGEMM 3.09 GFLOPS 3.34 GFLOPS 8%
SGEMM MT 6.08 GFLOPS 6.56 GFLOPS 7%
DGEMM 0.56 GFLOPS 1.66 GFLOPS 195%
DGEMM MT 1.11 GFLOPS 3.24 GFLOPS 191%
SFFT 0.72 GFLOPS 1.59 GFLOPS 119%
SFFT MT 1.44 GFLOPS 3.17 GFLOPS 120%
DFFT 1.41 GFLOPS 1.47 GFLOPS 4%
DFFT MT 2.78 GFLOPS 2.91 GFLOPS 4%
N-Body 460.8 KPairs/s 582.6 KPairs/s 26%
N-Body MT 917.6 KPairs/s 1160.0 KPairs/s 26%
Ray Trace 1.52 MPixels/s 2.31 MPixels/s 51%
Ray Trace MT 3.04 MPixels/s 4.64 MPixels/s 52%

The DGEMM operations aren't vectorized under ARMv7, but they are under ARMv8 thanks to DP SIMD support so you get huge speedups there from the recompile. The SFFT workload benefits handsomely from the increased register space, significantly reducing the number of loads and stores (there's something like a 30% reduction in instructions for the A64 codepath compared to the A32 codepath here). The conclusion? There are definitely reasons outside of needing more memory to go 64-bit.

A7 and OS X

Before I spent time with the A7 I assumed the only reason Apple would go 64-bit in mobile is to prepare for eventually deploying these chips into larger machines. A couple of years ago, when the Apple/Intel relationship was at its rockiest I would've definitely said that's what was going on. Today, I'm far less convinced. 

Apple continues to build its own SoCs and invest in them because honestly, no one else seems up to the job. Only recently do we have GPUs competitive with what Apple has been shipping, and with the A7 Apple nearly equals Intel's performance with Bay Trail on the CPU side. As far as Macs go though, there's still a big gap between the A7 and where Intel is at with Haswell. The deficiency that Intel had in the ultra mobile space simply doesn't translate to its position with the big Core chips. I don't see Apple bridging that gap anytime soon. On top of that, the Apple/Intel relationship is very good at this point.

Although Apple could conceivably keep innovating to the point where an A-series chip ends up powering a Mac, I don't think that's in the cards today.



CPU Performance

For our cross-platform CPU performance tests we turn to the usual collection of Javascript and HTML5 based browser tests. Most of our comparison targets here are smartphones with two exceptions: Intel's Bay Trail FFRD and Qualcomm's MSM8974 Snapdragon 800 MDP/T. Both of those platforms are test tablets, leveraging higher TDP silicon in a tablet form factor. The gap between the TDP of Apple's A7 and those two SoCs isn't huge, but there is a gap. I only include those platforms as a reference point. As you're about to see, the work that Apple has put into the A7 makes the iPhone 5s performance competitive with both. In many cases the A7 delivers better performance than one or both of them. A truly competitive A7 here also gives an early indication of the baseline to expect from the next-generation iPad.

We start with SunSpider's latest iteration, measuring the performance of the browser's js engine as well as the underlying hardware. It's possible to get good performance gains by exploiting advantages in both hardware and software here. As of late SunSpider has turned into a serious optimization target for all browser and hardware vendors, but it can be a good measure of an improving memory subsystem assuming the software doesn't get in the way of the hardware.

SunSpider Javascript Benchmark 1.0 - Stock Browser

Bay Trail's performance crown lasted all of a week, and even less than that if you count when we actually ran this benchmark.  The dual-core A7 is now the fastest SoC we've tested under SunSpider, even outpacing Qualcomm's Snapdragon 800 and ARM's Cortex A15. Apple doesn't quite hit the 2x increase in CPU performance here, but it's very close at a 75% perf increase compared to the iPhone 5. Update: Intel responded with a Bay Trail run under IE11, which comes in at 329.6 ms.

Next up is Kraken, a heavier js benchmark designed to stress more forward looking algorithms. Once again we run the risk of the benchmark becoming an optimization target, but in the case of Kraken I haven't seen too much attention paid to it. I hope it continues to fly under the radar as I've liked it as a benchmark thus far.

Mozilla Kraken Benchmark - 1.1

The A7 falls second only to Intel's Atom Z3770. Although I haven't yet published these results, the 5s performs very similarly to an Atom Z3740 - a more modestly clocked Bay Trail SKU from Intel. Given the relatively low CPU frequency I'm not at all surprised that the A7 can't compete with the fastest Bay Trail but instead is better matched for a middle of the road SKU. Either way, A7's performance here is downright amazing. Once again there's a performance advantage over Snapdragon 800 and Cortex A15, both running at much higher peak frequencies (and likely higher power levels too, although that's speculation until we can tear down an S800 platform and a 5s to compare).

Compared to the iPhone 5, the 5s shows up at over 2.3x the speed of last year's flagship.

Next up is Google's Octane benchmark, yet another js test but this time really used as a design target for Google's own V8 js engine. Devices that can run Chrome tend to do the best here, potentially putting the 5s at a disadvantage.

Google Octane Benchmark v1

Bay Trail takes the lead here once again, but again I expect the Z3740 to be a closer match for the A7 in the 5s at least (it remains to be seen how high the iPad 5 version of Cyclone will be clocked). The performance advantage over the iPhone 5 is a staggering 92%, and obviously there are big gains over all of the competing ARM based CPU architectures. Apple benefits slightly from Mobile Safari being a 64-bit binary, although I don't know if it's actually getting anything here other than access to the increased register space.

Our final browser test is arguably the most interesting. Rather than focusing on js code snippets, Browsermark 2.0 attempts to be a more holistic browser benchmark. The result is much less peaky performance and a better view at the sort of moderate gains you'd see in actual usage.

Browsermark 2.0

There's a fair amount of clustering around 2500 with very little differentiation between a lot of the devices. The unique standouts are the Snapdragon 800 based G2 from LG, and of course the iPhone 5s. Here we see the most modest example of the A7's performance superiority at roughly 25% better than the iPhone 5. Not to understate the performance of the iPhone 5s, but depending on workload you'll see a wide range of performance improvements.



iPhone Performance Across Generations

 

We did this in the iPhone 5 review, so I thought I'd continue the trend here. For those users who have no desire to leave iOS and are looking to find the best time to upgrade, these charts offer a unique historical look at iPhone performance over the generations. I included almost all iPhone revisions here, the sole exception being the iPhone 3G which I couldn't seem to find. 
 
All of the devices were updated to the latest supported version of iOS. That's iOS 7 for the iPhone 4 and later, iOS 6.1.3 for the iPhone 3GS and iOS 3.1.3 for the original iPhone.
 
At its keynote, Apple talked about the iPhone 5s offering up to 41x the CPU performance of the original iPhone. Looking at SunSpider however, we get a very different story:

iPhone Generations - SunSpider 1.0

Performance improved by a factor of 100 compared to the original iPhone. You could cut that roughly in half if the original iPhone were able to run iOS 4 and its newer js engine. Needless to say, Apple's CPU performance estimates aren't unreasonable. We've come a long way since the days when ARM11 cores were good enough.

Even compared to a relatively modern phone like the iPhone 4, the jump to a 5s is huge. The gap isn't quite at the level of an order of magnitude, but it's quickly approaching it. Using the single core iPhone 4 under iOS 7 just feels incredibly slow. Starting with the 4S things get a lot better, but I'd say the iPhone 4 is at the point now where it's starting to feel too slow even for normal consumers (at least with iOS 7 installed).

iPhone Generations - Browsermark 2.0

Browsermark 2.0 gives us a good indication of less CPU bound performance gains. Here we see over a 5x increase in performance compared to the original iPhone, and an 83% increase compared to the iPhone 4.

I wanted to have a closer look at raw CPU performance so I turned to Geekbench 3. Unfortunately Geekbench 3 won't run on anything older than iOS 6, so the original iPhone bows out of this test.

iPhone Generations - Geekbench 3 (Single Threaded)

Single threaded performance scaled by roughly 9x from the 3GS to the iPhone 5s. The improvement since the iPhone 4/4S days is around 6.5x. Single threaded performance often influences snappiness and UI speed/feel, so it's definitely an important vector to scale across.

iPhone Generations - Geekbench 3 (Multi Threaded)

Take into account multithreaded performance and the increase over the 3GS is even bigger, almost 17x now.

The only 3D test I could get to reliably run across all of the platforms (outside the original iPhone) was Basemark X. Again I had issues getting Basemark X running in offscreen mode on iOS 7 so all of the tests here are run at each device's native resolution. In the case of the 3GS to 4 transition, that means a performance regression as the 3GS had a much lower display resolution to deal with.

iPhone Generations - Basemark X (Onscreen)

Apple has scaled GPU performance pretty much in line with CPU performance over the years. The 5s scores 15x the frame rate of the iPhone 4, at a higher resolution too.

iPhone 5s vs. Bay Trail

I couldn't help but run Intel's current favorite mobile benchmark on the iPhone 5s. WebXPRT by Principled Technologies is a collection of browser based benchmarks that use HTML5 and js to simulate a number of workloads (photo editing, face detection, stocks dashboard and offline notes).

iPhone 5s vs. Bay Trail - WebXPRT (Chrome/Mobile Safari)

Granted we're comparing across platforms/browsers here, but the 5s as a platform does extremely well in Intel's favorite benchmark. The 5c by comparison performs a lot more like what we'd expect from a smartphone platform. The iPhone 5s is in a league of its own here. While I don't expect performance equalling the Atom Z3770 across the board, the fact that Apple is getting this close (with two fewer cores at that) is a testament to the work done in Cupertino.

At its launch event Apple claimed the A7 offered desktop class CPU performance. If it really is performance competitive with Bay Trail, I think that statement is a fair one to make. We're not talking about Haswell or even Ivy Bridge levels of desktop performance, but rather something close to mobile Core 2 Duo class. I've broken down the subtests in the table below:

WebXPRT Performance (time in ms, lower is better)
Chrome/Mobile Safari Photo Effects Face Detection Stocks Offline Notes
Apple iPhone 5s (Apple A7 1.3GHz) 878.9 ms 1831.4 ms 436.1 ms 604.6 ms
Intel Bay Trail FFRD (Atom Z3770 1.46GHz) 693.5 ms 1557.0 ms 542.9 ms 737.3 ms
AMD A4-5000 (1.5GHz) 411.2 ms 2349.5 ms 719.1 ms 880.7 ms
Apple iPhone 5c (Apple A6 1.3GHz) 1987.6 ms 4119.6 ms 763.6 ms 1747.6 ms

It's not a clean sweep for the iPhone 5s, but keep in mind that we are comparing to the best AMD and Intel have to offer in this space. I suspect part of why this is close is because both of those companies have been holding back a bit (there's no rush to build the fastest low margin parts), but it doesn't change reality.

 



GPU Architecture

Dating back to the original iPhone, Apple has relied on GPU IP from Imagination Technologies. In recent years, the iPhone and iPad lines have pushed the limits of Img’s technology - integrating larger and higher performing GPUs than all other Img partners. Apple definitely attempted to obfuscate its underlying GPU architecture this time around for some reason.

Starting about a year ago I got a lot of tips saying that Apple would be integrating Imagination Technologies' PowerVR Series 6 GPU this generation, but I needed more proof.

The first indication that this isn’t simply a Series 5XT part is the listed support for OpenGL ES 3.0. The only GPUs presently shipping with ES 3.0 support are Qualcomm’s Adreno 3xx (which is only integrated into Qualcomm silicon), ARM’s Mali-T6xx series and PowerVR Series 6. NVIDIA’s Tegra 4 GPU doesn’t support ES 3.0, and it’s too early for Logan/mobile Kepler. With Qualcomm out of the running that leaves Mali and PowerVR Series 6.

“All GPUs used in iOS devices use tile-based deferred rendering (TBDR).”

Apple’s developer documentation lists all of its SoCs as supporting Tile Based Deferred Rendering (TBDR). If you ask Imagination, they will tell you that they are the only ones with a true TBDR implementation. However if you look at ARM’s Mali-T6xx documentation, ARM also claims its GPU is a TBDR.

The real hint comes with anti-aliasing support:

Note the last line in the screenshot above: MAX_SAMPLES = 8. That's a reference to 8 sample MSAA, a mode that isn't supported by ARM's Mali-T6xx hardware - only PowerVR Series 6 (Mali-T6xx supports 4x and 16x AA modes).

There are some other hints here that Apple is talking about PowerVR Series 6 when it references the A7’s GPU:

“The A7 GPU processes all floating-point calculations using a scalar processor, even when those values are declared in a vector. Proper use of write masks and careful definitions of your calculations can improve the performance of your shaders. For more information, see “Perform Vector Calculations Lazily” in OpenGL ES Programming Guide for iOS.

Medium- and low-precision floating-point shader values are computed identically, as 16-bit floating point values. This is a change from the PowerVR SGX hardware, which used 10-bit fixed-point format for low-precision values. If your shaders use low-precision floating point variables and you also support the PowerVR SGX hardware, you must test your shaders on both GPUs.”

As you'll see below, both of the quoted statements apply directly to PowerVR Series 6. With Series 6 Imagination moved to a scalar architecture, and ImgTec's developer documentation confirms that the lowest precision mode supported is FP16.

All of this leads me to confirm what I heard would be the case a while ago: Apple’s A7 is the first shipping mobile silicon to integrate ImgTec’s PowerVR Series 6 GPU.

Now let’s talk about hardware.

The A7’s GPU Configuration: PowerVR G6430

Previously known by the codename Rogue, Series 6 has been announced in the following configurations:

PowerVR Series 6 "Rogue"
GPU # of Clusters # of FP32 Ops per Cluster Total FP32 Ops Optimization
G6100 1 64 64 Area
G6200 2 64 128 Area
G6230 2 64 128 Performance
G6400 4 64 256 Area
G6430 4 64 256 Performance
G6630 6 64 384 Performance

Based on the delivered performance, as well as some other products coming down the pipeline I believe Apple’s A7 features a variant of the PowerVR G6430 - a 4 cluster Rogue design optimized for performance (vs. area).

Rogue is a significant departure from the Series 5XT architectures that were used in the iPhone 5, iPad mini and iPad 4. The biggest change? A switch to a fully scalar architecture, similar to the present day AMD and NVIDIA GPUs.

Whereas with 5XT designs we talked about multiple cores, the default replication unit in Rogue is a “cluster”. Each core in 5XT replicated all hardware, while each cluster in Rogue only replicates the shader ALUs and texture hardware. Rogue is still a unified architecture, but the front end no longer scales 1:1 with shading hardware. In many ways this approach is a lot more sensible, as it is typically how you build larger GPUs.

In 5XT, each core featured a number of USSE2 pipelines. Each pipeline was capable of a Vec4 multiply+add plus one additional FP operation that could be dual-issued under the right circumstances. Img never detailed the latter so I always counted flops by looking at the number of Vec4 MADs. If you count each MAD as two FP operations, that’s 8 FLOPS per USSE2 pipe. Each USSE2 was a SIMD, so that’s one instruction across all 4 slots and not some combination of instructions. If you had 3 MADs and something else, the USSE2 pipe would act as a Vec3 unit instead. The same goes for 1 or 2 MADs.

With Rogue the USSE2 pipe is gone and replaced by a Unified Shading Cluster (USC). Each USC is a 16-wide scalar SIMD, with each slot capable of up to 4 FP32 ops per clock. Doing the math, a single USC implementation can do a total of 64 FP32 ops per clock - the equivalent of a PowerVR SGX 543MP2. Efficiency obviously goes up with a scalar design, so realizable performance will likely be higher on Rogue than 5XT.

The A7 is a four cluster design, so that's four USCs, or a total of 256 FP32 ops per clock. At 200MHz that would give the A7 twice the peak theoretical performance of the GPU in the iPhone 5. And from what I've heard, the G6430 is clocked much higher than that.
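Spelling that arithmetic out using the per-cluster figures from the Rogue table above (Apple doesn't disclose the A7's actual GPU clock, so treat the frequencies as reference points):

```latex
\text{Peak FP32 throughput} = 4\ \text{clusters} \times 64\ \tfrac{\text{FP32 ops}}{\text{clock}\cdot\text{cluster}} \times f =
\begin{cases}
51.2\ \text{GFLOPS}, & f = 200\ \text{MHz} \\
76.8\ \text{GFLOPS}, & f = 300\ \text{MHz}
\end{cases}
```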

There's more graphics horsepower under the hood of the iPhone 5s than there is in the iPad 4. While I don't doubt the iPad 5 will once again widen that gap, keep in mind that the iPhone 5s has less than 1/4 the number of pixels of the iPad 4. If I were a betting man, I'd say that the A7 was designed not only to drive the 5s' 1136 x 640 display, but also a higher res panel in another device. Perhaps an iPad mini with Retina Display? The memory bandwidth requirements would still need solving, but the A7 surely has enough compute power to get there. There's also the fact that Apple has prior history of delivering an SoC that wasn't perfect for the display (e.g. iPad 3).

GPU Performance

As I mentioned earlier, the iPhone 5s is the first Apple device (and consumer device in the world) to ship with a PowerVR Series 6 GPU. The G6430 inside the A7 is a 4 cluster configuration, with each cluster featuring a 16-wide array of SIMD pipelines. Whereas the 5XT generation of hardware used a 4-wide vector architecture (1 pixel per clock, all 4 color components per SIMD), Series 6 moves to a scalar design (think 16 pixels per clock, one color per clock). Each pipeline is capable of two FP32 MADs per clock, for a total of 64 FP32 operations per clock, per cluster. With the A7's 4 cluster GPU, that works out to be the same throughput per clock as the 4th generation iPad.

Imagination claims its new scalar architecture is not only more computationally dense, but also far more efficient. With the transition to scalar GPU architectures in the PC space we generally saw efficiency go up, so I'm inclined to believe Imagination's claims here.

Apple claims up to a 2x increase in GPU performance compared to the iPhone 5, but just looking at the raw numbers in the table below there's far more shading power under the hood of the A7 than only "2x" the A6.

Mobile SoC GPU Comparison
GPU | Used In | SIMD Name | # of SIMDs | MADs per SIMD | Total MADs | GFLOPS @ 300MHz
PowerVR SGX 543 | - | USSE2 | 4 | 4 | 16 | 9.6 GFLOPS
PowerVR SGX 543MP2 | iPad 2/iPhone 4S | USSE2 | 8 | 4 | 32 | 19.2 GFLOPS
PowerVR SGX 543MP3 | iPhone 5 | USSE2 | 12 | 4 | 48 | 28.8 GFLOPS
PowerVR SGX 543MP4 | iPad 3 | USSE2 | 16 | 4 | 64 | 38.4 GFLOPS
PowerVR SGX 554 | - | USSE2 | 8 | 4 | 32 | 19.2 GFLOPS
PowerVR SGX 554MP2 | - | USSE2 | 16 | 4 | 64 | 38.4 GFLOPS
PowerVR SGX 554MP4 | iPad 4 | USSE2 | 32 | 4 | 128 | 76.8 GFLOPS
PowerVR G6430 | iPhone 5s | USC | 4 | 32 | 128 | 76.8 GFLOPS

GFXBench 2.7

As always, we'll start with GFXBench (formerly GLBenchmark) 2.7 to get a feel for the theoretical performance of the new GPU. GFXBench 2.7 tends to be more computationally bound than most games as it is frequently used by silicon vendors to stress hardware, not by game developers as an actual performance target. Keep that in mind as we get to some of the actual game simulation results.

GLBenchmark 2.7 - Fill Test (Offscreen 1080p)

Twice the fill rate of the iPhone 5, and clearly higher than anything else we've tested. Rogue is off to a good start.

GLBenchmark 2.7 - Triangle Throughput (Offscreen 1080p)

What's this? A performance regression? Remember what I said earlier in the description of Rogue: whereas 5XT replicated nearly the entire GPU for "multi-core" versions, multi-cluster versions of Rogue only replicate the shader array and texture hardware. The result? We don't see the same sort of peak triangle setup scaling we did back on multi-core 5XT parts. I don't expect this will be a big issue in actual games (it's likely a better balance between triangle setup/rasterization and shading hardware), but it's worth pointing out.

GLBenchmark 2.7 - Triangle Throughput, Fragment Lit (Offscreen 1080p)

This is the worst-case regression we've seen from 5XT to Rogue. It's clear that per chip triangle rates are much higher on Rogue, but with a many core implementation of 5XT there's just no competing. I suspect this change is part of how Img was able to increase the overall density of Rogue vs. 5XT. Now the question is whether or not this regression will actually show up in games. To find out we turn to the two game simulation tests in GFXBench 2.7, starting with the most stressful one: T-Rex HD.

As always, the onscreen tests run at a device's native resolution with v-sync enabled, while the offscreen results happen at 1080p and v-sync disabled.

GLBenchmark 2.7 - T-Rex HD

As expected, the G6430 in the iPhone 5s is more than twice the speed of the part in the iPhone 5. It is also the first device we've tested capable of breaking the 30 fps barrier in T-Rex HD at its native resolution. Given just how ridiculously intense this test is, I think it's safe to say that the iPhone 5s will probably have the longest shelf life from a gaming perspective of any previous iPhone.

GLBenchmark 2.7 - T-Rex HD (Offscreen 1080p)

The offscreen test helps put the G6430's performance in perspective. Here we see the 5s falling just barely behind Qualcomm's Adreno 330 (Snapdragon 800). There are obvious thermal differences between the two platforms, but if we look at the G2's performance (another S800/A330 part) we get a better indication of an apples to apples comparison. Looking at the leaked Nexus 5 (also S800/A330) T-Rex HD scores confirms what we're seeing above. In a phone, it looks like the G6430 is a bit quicker than Qualcomm's Adreno 330.

The Egypt HD tests are much lighter and far closer to the workload of many games on the store today, although admittedly the test is getting a bit too light to stress modern hardware.

GLBenchmark 2.7 - Egypt HD

Onscreen we're at Vsync already, something the iPhone 5 wasn't capable of doing. The 5s should have no issues running most games at 30 fps.

GLBenchmark 2.7 - Egypt HD (Offscreen 1080p)

Offscreen, even at 1080p, performance doesn't really change. Qualcomm's Adreno 330 is definitely faster, at least in the MDP/T. In the G2, its performance lags behind the G6430. I really want to measure power on these things.

3DMark

3DMark finally released an iOS version of its benchmark, enabling us to run the 5s through on yet another test. As we've discovered in the past, 3DMark is far more of a CPU test than GFXBench. While CPU load will range from 6 - 25% during GFXBench, we'll see usage greater than 50% on 3DMark - even during the graphics tests. 3DMark is also heavily threaded, with its physics test taking advantage of quad-core CPUs.

With the iOS release of the benchmark comes a new offscreen rendering mode called Unlimited. The benchmark is the same but it renders offscreen at 720p with the display only being updated once every 100 frames to somewhat get around vsync. Because of the new test we don't have a ton of comparison data, so I've included whatever we've got at this point.

3DMark Unlimited - Ice Storm

3DMark ends up being more of a CPU and memory bandwidth test rather than a raw shader performance test like GFXBench and Basemark X. The 5s falls behind the Snapdragon 800/Adreno 330 based G2 in overall performance. To find out how much of that is GPU performance and how much is a lack of four cores, let's look at the subtests.

3DMark Unlimited - Graphics

The graphics test is more GPU bound than CPU bound, and here we see the G6430 based iPhone 5s pull ahead. Note how well the Moto X does thanks to its very highly clocked CPU cores rather than its GPU - although this is a graphics test, it's still influenced by CPU performance.

3DMark Unlimited - Graphics Test 1

3DMark Unlimited - Graphics Test 2

3DMark Unlimited - Physics

The physics test hits all four cores in a quad-core chip and explains the G2 pulling ahead in overall performance. Note that the 5s showed no improvement in this largely CPU bound test, leading me to believe that we've hit some sort of a bug with 3DMark and the new Cyclone core.

3DMark Unlimited - Physics Test

Basemark X

Basemark X is a new addition to our mobile GPU benchmark suite. There are no low level tests here, just some game simulation tests run at both onscreen (device resolution) and offscreen (1080p, no vsync) settings. The scene complexity is far closer to GLBenchmark 2.7 than the new 3DMark Ice Storm benchmark, so frame rates are pretty low.

Unfortunately I ran into a bug with Basemark X under iOS 7 on the iPhone 5/5c/5s that prevented the offscreen test from completing, leaving me with only onscreen results at native resolution.

Basemark X - On Screen

Once again we're seeing greater than 2x scaling comparing the iPhone 5s to the 5.



M7 Motion Coprocessor

In addition to the A7 SoC, the iPhone 5s ships with a discrete “motion coprocessor” called the M7. The M7 acts as a sensor hub, accepting inputs from the accelerometer, gyroscope and compass. The M7 also continuously monitors motion data and can interface with iOS 7’s CoreMotion API. The combination of those two things is designed to enable a completely new class of health and fitness applications.

Fundamentally the role of the M7 was previously serviced by the A6 SoC. Apple broke out its functionality into a separate chip, allegedly to reduce power consumption. With the M7 servicing motion and sensor requests, the A7 SoC can presumably remain asleep for longer. Any application level interaction will obviously require that the A7 wake up, but the M7 is supposed to enable a lot of background monitoring of sensor and motion data at very low power levels.

In the earliest implementation of CoreMotion and the M7, the iPhone 5s in combination with iOS maps will automatically switch between driving and walking directions when it detects that you’ve transitioned from automobile to pedestrian travel. The M7 can also signal iOS that you’re driving, and prevent the OS from popping up requests to join WiFi networks it finds along the way. Hardware enabled situational awareness is a big step for modern smartphones, and the combination of hardware (M7) and software (CoreMotion API) both make a lot of sense. I’ve seen demos in the past of companies looking to parse sensor data (along with other contextual data from your device) to determine when you’re at work, or the gym or when you’re otherwise occupied and can’t immediately respond to a message. Your phone understanding your current state in addition to your location is an extremely important step towards making smartphones even more personal.
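The M7’s data reaches apps through CoreMotion’s activity APIs. As a minimal sketch (written in present-day Swift for readability; iOS 7 exposed the same CMMotionActivityManager calls through Objective-C), an app can subscribe to the M7’s activity log like this - the print statements are just placeholder reactions:

import CoreMotion

let activityManager = CMMotionActivityManager()

if CMMotionActivityManager.isActivityAvailable() {
    // Updates come from the coprocessor's low-power activity log; the A7 only
    // has to wake up to run this handler, not to sample the sensors itself.
    activityManager.startActivityUpdates(to: .main) { activity in
        guard let activity = activity else { return }
        if activity.automotive {
            print("Driving - e.g. suppress WiFi join prompts")
        } else if activity.walking || activity.running {
            print("On foot - e.g. switch Maps to walking directions")
        } else if activity.stationary {
            print("Stationary")
        }
    }
}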

The role of the M7 makes a lot of sense - its location physically outside of the A7 SoC is what’s unusual. Apple could’ve just as easily integrated its functionality on die and just power gated the rest of the SoC when idle. The M7’s existence outside of the main A7 die can imply a number of things.

The best theory I have is that we’ll see some deployments of A7 without the associated M7 part. Along those lines, a very smart man theorized that perhaps M7 is built on a different manufacturing process than A7. We’ll have to wait for someone to delayer the M7 to truly find out, but that will tell us a lot about Apple’s motivations here.

Touch ID

I’ve somehow managed to escape most fingerprint sensors on computing devices. I owned a couple of laptops that had the old optical style sensors, but it was always quicker for me to type in a password than it was to deal with the sensor. I also wrote a piece on Motorola’s Atrix a few years back, which had a fingerprint sensor built into the power/lock button. The experience back then wasn’t all that great either. If I got into a groove I’d be able to unlock the Atrix by sliding my finger over the sensor every time, but more often than not I’d run into problems. The unpredictable nature of the Atrix’s sensor is what ultimately made it more frustrating than useful. The concept, however, was sound.

No one likes typing in a passcode. Four digit ones aren’t really all that secure unless you have some sort of phone wipe on x-number-of-retries setting. Longer passcodes are more secure but also a pain to type in regularly.

Security is obviously extremely important on modern day smartphones. Personal messages, emails, access to all of your social networking, banking, airline reservations - everything is accessible once you get past that initial passcode on a phone. We also check our devices frequently enough that most of us want some sort of grace period before a passcode is required again. It’s bad practice, but it’s a great example of convenience trumping security.

When I first heard the rumors of Apple integrating a fingerprint scanner into the iPhone 5s’ home button I was beyond skeptical. I for sure thought that Apple had run out of ideas. Even listening to the feature introduced live, I couldn’t bring myself to care. Having lived with the iPhone 5s for the past week however, I can say that Touch ID is not only extremely well executed, but a feature I miss when I’m not using the 5s.

The hardware is pretty simple to understand. Touch ID is a capacitive fingerprint sensor embedded behind a sapphire crystal cover. The sensor works by forming a capacitor with your finger/thumb: it applies a voltage to one plate of the capacitor, using your finger as the other plate. The resulting electric field between your dermis (the layer right below your outward facing skin) and the Touch ID sensor maps out the ridges and valleys of your fingerprint. Because the data being captured is somewhat sub-epidermal, dirt and superficial damage to your finger shouldn’t render Touch ID inoperable (although admittedly I didn’t try cutting any of my fingers to test this theory). What’s recorded is a map of your fingerprint (not an image of your finger), stored in secure memory on the A7 SoC itself. The data is stored in an encrypted form and is never uploaded to iCloud or stored anywhere other than on your A7 SoC.

Behind the Touch ID sensor is a mechanical switch that feels similar to the home button on the iPhone 5 or 5c. You still get the same click and the same resistance. The only physical differences are the lack of the home square printed on the button, and the presence of a steel ring around it. The steel ring acts as a conductive sensor for your finger: make contact with the ring and Touch ID wakes up (presumably when your phone is in a state where Touch ID interactions are expected). Without making contact with the ring, Touch ID won’t work (I confirmed this by trying to unlock the 5s with my pinky while never touching the ring).

Having a passcode is a mandatory Touch ID requirement. You can’t choose to only use Touch ID to unlock your device. In the event that your fingerprint isn’t recognized, you can always manually type in your passcode.

Your fingerprint data isn’t accessible by third party apps at this point, and even has limited exposure under iOS 7. At present, you can only use Touch ID to unlock your phone or to authorize an iTunes purchase. If you’ve restarted your phone, you need to manually type in your passcode once before you can use Touch ID. If you haven’t unlocked your phone in 48 hours, you’ll need to supply your passcode before Touch ID becomes an option again. Five consecutive failed Touch ID attempts will also force you to enter your passcode.
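There’s no fingerprint API exposed to developers at all right now, but the fallback rules above are simple enough to express as a check. The sketch below is purely illustrative - none of these names correspond to an actual Apple API:

import Foundation

// Illustrative only: the passcode-fallback rules described above, as a check.
struct TouchIDGate {
    var hasUnlockedSincePowerOn: Bool   // passcode required once after every restart
    var lastUnlock: Date                // passcode required after 48 hours without an unlock
    var failedAttempts: Int             // passcode required after 5 failed fingerprint reads

    func touchIDAllowed(now: Date = Date()) -> Bool {
        guard hasUnlockedSincePowerOn else { return false }
        guard now.timeIntervalSince(lastUnlock) < 48 * 60 * 60 else { return false }
        return failedAttempts < 5
    }
}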

Other first party services like Game Center logins aren’t Touch ID enabled either. Even using Touch ID for iTunes purchases requires an opt-in step in the Settings menu; it’s not on by default.

Apple has done its best to make the Touch ID configuration and subsequent recognition process as seamless as possible. There’s an initial training period for any finger you want stored by the device. At present you can store up to five unique fingers. At first that sounded like a lot, but I ended up using four of those slots right away. The idea is to store any finger you’d possibly want to use to unlock the device, to avoid Touch ID becoming a limit on how you hold your phone. For me that amounted to both thumbs and both index fingers. The thumbs were an obvious choice since I don’t always hold my phone in the same hand. I added my index fingers for the phone-on-table use case. That left me with a fifth slot that I could use for myself or anyone else I wanted to give access to my phone.

The training process is pretty simple. Just lift and place your finger on the Touch ID sensor a few times while it maps out your fingerprint. After you complete that step, do it again but focus on the edges of your finger instead. The surface area of the Touch ID sensor is pretty small compared to most fingers, so the more data you can give the sensor the better. Don’t worry about giving Touch ID a perfect map of your fingers on the first try; Touch ID is designed to adapt over time. Whenever an unlock attempt fails and is followed by a successful attempt, the 5s will compare the print map from the failed attempt, and if it determines that both attempts were made with the same finger it will expand the match database for that finger. Indeed I tested and verified this was working. I deliberately picked a weird angle and part of my thumb to unlock the 5s, which was immediately rejected. I then followed it up with a known good placement and was successful. I then repeated the weird attempt from before and had it immediately succeed. That could’ve been dumb luck or the system working as intended - there’s no end user exposure to what’s going on inside.

With fingers added to Touch ID, everything else works quite smoothly. The easiest way to unlock the iPhone 5s with Touch ID enabled is to press and release the home button and just leave your finger/thumb there. The button press wakes the device, and leaving your digit there after the fact gives the Touch ID sensor time to read your print and unlock the device. In practice, I found it quicker than manually typing in a four digit passcode. Although the process is faster than typing in a passcode, I feel like it could go even quicker. I’m sure Apple is erring on the side of accuracy rather than speed, but I do feel like there’s some room for improvement still.

Touch ID accuracy didn’t seem to be impacted by oily skin, but the sensor quickly becomes non-functional if your finger is wet. The same goes for extremely dirty fingers.

Touch ID ended up being much better than I thought it would be, and it’s honestly the first fingerprint scanner system that I would use on a regular basis. It’s a much better way of unlocking your phone. I’ve been transitioning between the 5s, the 5c and the iPhone 5 for the past week, and whenever I’d go to the latter two I’d immediately miss the Touch ID sensor. The feature alone isn’t enough to sell me on the 5s, but it’s definitely a nice addition. My only real wish is that Touch ID were accepted as an authentication method in more places, although I understand the hesitation Apple must have in opening the floodgates.

 



Battery Life

Brian did some excellent sleuthing and came across battery capacities for both the iPhone 5s and 5c in Apple’s FCC disclosures. The iPhone 5 had a 3.8V 5.45Wh battery, while the 5s boosts total capacity to 5.96Wh (an increase of 9.35%). The move to a 28nm process doesn’t come with all of the benefits of a full node shrink, and it’s likely not enough to completely offset the higher potential power draw of a much beefier SoC. Apple claims the same or better battery life on the 5s compared to the iPhone 5; in practice the answer is a bit more complicated.
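For reference, the quick math on that capacity increase, using the Wh figures from the FCC filings:

import Foundation

// Capacity delta between the two batteries, as quoted above.
let iPhone5Capacity  = 5.45   // Wh
let iPhone5sCapacity = 5.96   // Wh
let increase = (iPhone5sCapacity - iPhone5Capacity) / iPhone5Capacity * 100
print(String(format: "%.1f%% more capacity", increase))   // ~9.4%, in line with the ~9.35% figure above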

Unlike previous S-updates, the 5s at least benefits from a half-node shrink. Both the iPhone 3GS and iPhone 4S stayed on the same process node as their predecessors while driving up performance. In the case of the 3GS the performance gains outweighed their power cost, while with the iPhone 4S we generally saw a regression in battery life.

The iPhone 5s improves power consumption by going to 28nm, but turns those savings into increased performance. The SoC also delivers a wider dynamic range of performance than we’ve ever seen from an Apple device. There’s as much CPU power here as in the first 11-inch MacBook Air, and more GPU power than in an iPad 4.

To find out the balance of power savings vs. additional performance I turned to our current battery life test suite, which we first introduced with the iPhone 5 review last year.

We'll start with our WiFi battery life test. As always, we regularly load web pages at a fixed interval until the battery dies (all displays are calibrated to 200 nits).
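As a rough sketch of the shape of that test (not our actual harness - the URLs and interval below are placeholders), the loop amounts to a timer-driven page load with battery level logged along the way:

import UIKit
import WebKit

// Sketch only: cycle through a fixed set of pages on a timer with the display
// pinned at 200 nits, logging battery level until the device dies.
final class BrowsingLoop {
    private let webView = WKWebView()
    private let pages = [URL(string: "https://example.com/a")!,
                         URL(string: "https://example.com/b")!]
    private var index = 0

    func start(interval: TimeInterval = 15) {
        UIDevice.current.isBatteryMonitoringEnabled = true
        Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { _ in
            self.webView.load(URLRequest(url: self.pages[self.index % self.pages.count]))
            self.index += 1
            print("battery: \(UIDevice.current.batteryLevel * 100)%")
        }
    }
}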

AT Smartphone Bench 2013: Web Browsing Battery Life (WiFi)

The iPhone 5s regresses a bit compared to the 5 in this test (~12% reduction despite the larger battery). We're loading web pages very aggressively here, likely keeping the A7 cores running at their most power hungry state. Even the 5c sees a bit of a regression compared to the 5, which makes me wonder if we're seeing some of the effects of an early iOS 7 release here.

The story on LTE is a bit different. Here we see a slight improvement in battery life compared to the iPhone 5, although the larger battery of the 5s doesn't seem to give it anything other than parity with the 5c:

AT Smartphone Bench 2013: Web Browsing Battery Life (4G LTE)

Our cellular talk time test is almost entirely display and SoC independent, turning it mostly into a battery capacity test:

Cellular Talk Time

You can see the close grouping of the smaller iPhones at the bottom of the chart. There's a definite improvement in call time compared to the iPhone 5. We're finally up above iPhone 4S levels there.

AT Smartphone Bench 2013: GLBenchmark 2.5.1 Battery Life

Our Egypt HD based 3D battery life test gives us the first indication that Rogue, at least running fairly light code, can be more power efficient than the outgoing 5XT. Obviously the G6430 implemented here can run at fairly high performance levels, so I'm fully expecting peak power consumption to be worse, but for more normal workloads there's no regression at all - a very good sign.



Camera

The iPhone 5s continues Apple’s tradition of sensible improvements to camera performance each generation. I was pleased to hear Phil Schiller deliver a line about how bigger pixels are a better route to improving image quality vs. throwing more pixels at the problem. I remember hearing our own Brian Klug deliver almost that exact same message a year earlier when speaking to some engineers at another phone company.

The iPhone 5s increases sensor size compared to the iPhone 5. Last week Brian dug around and concluded that the 5s’ iSight camera sensor likely uses a format very similar to that of the HTC One. The difference here is while HTC opted for even larger pixels (arriving at 4MP), Apple chose a different balance of spatial resolution to light sensitivity with its 8MP sensor.

One thing ingrained in my mind from listening to Brian talk about optics is that there is no perfect solution; everything ultimately boils down to a selection of tradeoffs. Looking at Apple/HTC vs. the rest of the industry we see one set of tradeoffs, with Apple and HTC optimizing for low light performance while the rest of the industry chases smaller pixel sizes. Even between Apple and HTC, however, there are differing tradeoffs: HTC went more extreme on pixel size while Apple opted for more spatial resolution.

iPhone 4, 4S, 5, 5S Cameras
Property | iPhone 4 | iPhone 4S | iPhone 5 | iPhone 5S
CMOS Sensor | OV5650 | IMX145 | IMX145-Derivative | ?
Sensor Format | 1/3.2" (4.54 x 3.42 mm) | 1/3.2" (4.54 x 3.42 mm) | 1/3.2" | ~1/3.0" (4.89 x 3.67 mm)
Optical Elements | 4 Plastic | 5 Plastic | 5 Plastic | 5 Plastic
Pixel Size | 1.75 µm | 1.4 µm | 1.4 µm | 1.5 µm
Focal Length | 3.85 mm | 4.28 mm | 4.10 mm | 4.12 mm
Aperture | F/2.8 | F/2.4 | F/2.4 | F/2.2
Image Capture Size | 2592 x 1936 (5 MP) | 3264 x 2448 (8 MP) | 3264 x 2448 (8 MP) | 3264 x 2448 (8 MP)
Average File Size | ~2.03 MB | ~2.77 MB | ~2.3 MB | ~2.5 MB
From Brian's excellent iPhone 5s Camera Analysis post

Apple moved to 1.5µm pixels, up from 1.4µm in the iPhone 5. Remember that pixel size is quoted in a single dimension, so the overall increase in pixel area amounts to around 15%. Apple also moved to a faster aperture (F/2.2 vs. F/2.4 on the iPhone 5) to increase light throughput. The combination can result in significantly better low light photos than the outgoing 5.
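To make the math explicit: pixel pitch is quoted per side, so the light gathering area scales with the square of the pitch:

import Foundation

// Area gain from the 1.4µm -> 1.5µm pixel pitch move.
let oldPitch = 1.4   // µm, iPhone 5
let newPitch = 1.5   // µm, iPhone 5s
let areaGain = (newPitch * newPitch) / (oldPitch * oldPitch) - 1
print(String(format: "~%.0f%% larger pixel area", areaGain * 100))   // ~15%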

iPhone 5/5c Low Light

iPhone 5s Low Light

With the move to larger pixels, Apple has done away with its 2x2 binning mode in low light settings. The iPhone 5 would bin groups of pixels together once scene brightness dropped below a certain threshold to improve low light performance. The binned image would then be upscaled back to the full 8MP, trading off spatial resolution for low light sensitivity. The iPhone 5s doesn’t have to make this tradeoff. In practice I didn’t find any situations where the 5s’ low light performance suffered as a result; it always seemed to produce better shots than the iPhone 5.

iPhone 5/5c

iPhone 5s

Unlike some of the larger flagships we’ve reviewed lately, the iPhone 5s doesn’t ship with optical image stabilization (OIS). We’ve seen devices from HTC, LG and Nokia all ship with OIS, and have generally been pleased with the results. It’s not a surprise that the 5s doesn’t come with OIS as it’s largely the same physical platform as the outgoing 5. Still it would be great to see an Apple device ship with OIS. Perhaps on a larger iPhone.

As is always the case in space constrained camera systems, what Apple could not achieve in the physical space it hopes to make up for computationally. The 5s leverages electronic image stabilization as well as automatic combination of multiple frames from the capture buffer in order to deliver the sharpest shots each time.
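Apple doesn’t detail how the ISP scores or merges frames, so treat the following as a conceptual sketch only. One simple way to pick the sharpest frame out of a small capture buffer is to rank frames by local contrast - here the variance of a Laplacian-style filter over a luma plane represented as a 2D array:

// Conceptual sketch, not Apple's algorithm: score each frame's local contrast
// and keep the frame that scores highest.
func sharpness(_ luma: [[Double]]) -> Double {
    guard luma.count > 2, luma[0].count > 2 else { return 0 }
    var values: [Double] = []
    for y in 1..<(luma.count - 1) {
        for x in 1..<(luma[y].count - 1) {
            let lap = 4 * luma[y][x] - luma[y-1][x] - luma[y+1][x] - luma[y][x-1] - luma[y][x+1]
            values.append(lap)
        }
    }
    let mean = values.reduce(0, +) / Double(values.count)
    return values.map { ($0 - mean) * ($0 - mean) }.reduce(0, +) / Double(values.count)
}

func sharpestFrame(in buffer: [[[Double]]]) -> [[Double]]? {
    buffer.max(by: { sharpness($0) < sharpness($1) })
}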

Apple’s cameras have traditionally been quite good, not just based on sensor selection but looking at the entire stack from its own custom ISP (Image Signal Processor) and software. With the A7 Apple introduces a brand new ISP. Although we know very little about the new ISP, you can find references to Apple’s H6 ISP if you dig around.

Apple continues to ship one of the better auto modes among smartphone cameras I've used. I still want the option of full manual controls, but for most users Apple's default experience should be a very good one.

Capturing shots under iOS 7 is incredibly quick. Shot to shot latency is basically instantaneous now, thanks to a very fast ISP and the A7’s ability to quickly move data in and out of main memory. It’s impossible to write shots to NAND this quickly, so Apple is likely buffering shots in DRAM before bursting them out to non-volatile storage.

 

The new ISP enables a burst capture mode of up to 10 fps. To activate burst mode, simply hold down the shutter button and fire away. The iPhone 5s will maintain a 10 fps capture rate until the burst counter hits 999 images (which I most definitely tested). Although it took a while to write all 999 images, all of them were eventually committed to NAND.

Photos captured in burst mode are intelligently grouped so as not to clutter your photo gallery. The camera app will automatically flag what it thinks are the important photos, but you’re free to choose as many (or as few) as you’d like to include in your normal browsing view. Since all of the photos captured in burst mode are physically saved, regardless of whether or not you select them to appear among your photos, you can always just pull them off the 5s via USB.

The rear facing camera is paired with a new dual-LED True Tone flash. Rather than featuring a single white LED to act as a flash, Apple equips the iPhone 5s with two LEDs with different color tones (one with a cool tone and one with a warm tone). When set to fire, the 5s’ ISP and camera system will evaluate the color temperature of the scene, pre-fire the flash and determine the right combination of the two LEDs to produce the most natural illumination of the subject.
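Apple hasn’t published how the pre-flash metering maps scene color temperature onto an LED mix, but conceptually it’s an interpolation problem. A deliberately simplified sketch follows - the LED color temperatures are assumed, and a real implementation would mix in chromaticity space rather than raw Kelvin:

// Illustrative math only: pick a warm/cool LED mix that lands the combined
// flash near the estimated scene white point. LED temperatures are assumed.
let warmLED = 3000.0   // K, assumed
let coolLED = 5500.0   // K, assumed

func flashMix(forSceneTemperature scene: Double) -> (warm: Double, cool: Double) {
    let cool = min(max((scene - warmLED) / (coolLED - warmLED), 0), 1)
    return (warm: 1 - cool, cool: cool)
}

print(flashMix(forSceneTemperature: 4000))   // roughly 60% warm, 40% cool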

I’m not a huge fan of flashes, but I have to say that in a pinch the True Tone flash is appreciably better than the single LED unit on the iPhone 5. Taking photos of people with the new True Tone flash enabled produces much warmer and more natural looking results:

True Tone Flash Enabled

Even if your subject happens to be something other than a person, I’ve seen really good results from Apple’s True Tone flash.

I still believe the best option is to grab your photo using natural/available light, but with a smartphone being as portable as it is that’s not always going to be an option.

I have to say I appreciate the vector along which Apple improved the camera experience with the iPhone 5s. Improving low light performance (and quality in low light situations where you’re forced to use a flash) is a great message to carry forward.

Front Facing Camera

The iPhone 5s and iPhone 5c share the same upgraded front-facing FaceTime HD camera. The front facing camera gets a sensor upgrade, also with a move to larger pixels (1.9µm up from 1.75µm) while resolution and aperture remain the same at 720p and F/2.4. The larger sensor size once again improves low light performance of the FaceTime HD camera (iPhone 5 left vs. iPhone 5s right):



Video

Apple’s new H6 ISP brings with it a modernization of the video recording options for the iPhone 5s. The default video record mode is still 1080p at 30 fps, but there’s also a new 720p 120 fps “slo-mo” mode as well. In the latter, video is captured at 120 fps but optionally played back at 30 fps in order to achieve a high speed camera/slow motion effect. The result is pretty cool:

In the camera UI you can select which portions of the video play back at 30 fps and which portions play at full speed. The .mov file is stored on NAND as a ~27Mbps 720p120 stream without any customizations baked in; when you share it, however, the entire video is transcoded into a 30 fps format that preserves the slow motion effect.
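For a sense of what that bitrate means on disk, the back-of-the-envelope math on the quoted ~27Mbps stream:

// Storage cost of a ~27Mbps 720p120 recording.
let bitrateMbps = 27.0
let megabytesPerMinute = bitrateMbps / 8 * 60
print(megabytesPerMinute)   // ≈ 202 MB per minute of slo-mo footage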

The slo-mo mode is separate from the standard video recording mode; it’s the next stop on the dial in the new iOS 7 camera app. Video preview in slo-mo mode also happens at 60 fps, compared to 30 fps for the standard video record and still image capture modes.

Camera preview frame rate, toggling between slo-mo and normal modes

Adding high speed camera modes to smartphones is a great step in my opinion and a wonderful use of increases in ISP and SoC performance. I would like to see Apple expose a 1080p60 mode as well. Technically 1080p60 does require slightly more bandwidth than 720p120, but I’d hope that Apple targeted both in the design of H6 and simply chose to expose 720p120 as it’s an easier feature to market.
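The “slightly more bandwidth” point is easy to verify with raw pixel throughput:

// Raw pixel throughput of the two modes.
let rate1080p60 = 1920.0 * 1080 * 60    // ≈ 124.4M pixels/s
let rate720p120 = 1280.0 * 720 * 120    // ≈ 110.6M pixels/s
print(rate1080p60 / rate720p120)        // ≈ 1.125, i.e. 1080p60 pushes ~12.5% more pixels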

Standard 1080p30 recording is also available:



Display

The iPhone 5s, like the iPhone 5c, retains the same 4-inch Retina Display that was first introduced with the iPhone 5. The 4-inch 16:9 LCD features a 1136 x 640 resolution, putting it at the low end among flagship smartphones these days. It was clear from the get-go that a larger display wouldn’t be in the cards for the iPhone 5s. Apple has stuck to its two generation design cadence since the iPhone 3G/3GS days and gave no indication of breaking that trend now, especially with concerns of the mobile upgrade cycle slowing. Recouping investment costs on platform and industrial design is a very important part of making the business work.


Apple is quick to point out that iOS 7 does attempt to make better use of display real estate, but I can’t shake the feeling of being too cramped on the 5s. I’m not advocating that Apple go the route of some of the insanely large displays, but after using the Moto X for the past month I believe there’s a good optimization point somewhere around 4.6 - 4.7”. I firmly believe that Apple will embrace a larger display and branch the iPhone once more, but that time is just not now.

The 5s’ display remains excellent and well calibrated from the factory. In an unusual turn of events, my iPhone 5c sample came with an even better calibrated display than my 5s sample. It's a tradeoff - the 5c panel I had could go way brighter than the 5s panel, but its black levels were also higher. The contrast ratio ended up being very similar between the devices as a result. I've covered the panel lottery in relation to the MacBook Air, but it's good to remember that the same sort of multi-source components exist in mobile as well.

Brightness (White)

Brightness (Black)

Contrast Ratio

CalMAN Display Performance - White Point Average

CalMAN Display Performance - Grayscale Average dE 2000

Color accuracy is still excellent just out of the box. Only my iPhone 5c sample did better than the 5s in our color accuracy tests. Grayscale accuracy wasn't as good on my 5s sample however.

CalMAN Display Performance - Saturations Average dE 2000

Saturations:


 

CalMAN Display Performance - Gretag Macbeth Average dE 2000

GMB Color Checker: 


Cellular

When early PCB shots of the 5s leaked, I remember Brian counting solder pads on the board to figure out whether Apple had moved to a new Qualcomm baseband. Unfortunately his count came out the same as existing MDM9x15 based designs, which ended up being what launched. It’s unclear whether MDM9x25 was ready in time to be integrated into the iPhone 5s design, or if there was some other reason Apple chose against implementing it here. Regardless of the why, the result is effectively the same cellular capabilities as the iPhone 5.

Apple tells us that the wireless stack in the 5c and 5s is all new, but the lack of LTE-Advanced features like carrier aggregation and Category 4 150Mbps downlink makes it likely that we’re looking at an MDM9x15 derivative at best. The missing LTE-A support isn’t an issue at launch; however, as Brian mentioned on our mobile show, it’s going to quickly become a much needed feature for making efficient use of spectrum and delivering data in the most power efficient way possible.

The first part is relatively easy to understand. Carrier aggregation gives mobile network operators the ability to combine spectrum across non-contiguous frequency bands to service an area. The resulting increase in usable spectrum can be used to improve performance and/or support more customers on LTE in areas with limited present day LTE spectrum.

The second part, improving power efficiency, has to do with the same principles of race to sleep that we’ve talked about for years. The faster your network connection, the quicker your modem can transact data and fall back into a lower power sleep state.
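A toy example (with entirely made-up power numbers) shows why race to sleep favors the faster link, even if it draws a bit more power while active:

// Assumed figures throughout - the point is the shape of the math, not the values.
func energy(forMegabytes mb: Double, linkMbps: Double, activeWatts: Double,
            sleepWatts: Double, windowSeconds: Double) -> Double {
    let activeTime = mb * 8 / linkMbps                          // seconds spent transferring
    return activeTime * activeWatts + (windowSeconds - activeTime) * sleepWatts
}

let slow = energy(forMegabytes: 50, linkMbps: 75,  activeWatts: 1.2, sleepWatts: 0.01, windowSeconds: 10)
let fast = energy(forMegabytes: 50, linkMbps: 150, activeWatts: 1.5, sleepWatts: 0.01, windowSeconds: 10)
print(slow, fast)   // the faster link finishes sooner and uses less total energy over the window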

The 5s’ omission of LTE-A likely doesn’t have immediate implications, but those who hold onto their devices for a long time will have to deal with the fact that they’re buying at the tail end of a transition to a new group of technologies.

In practice I didn’t notice substantial speed differences between the iPhone 5s, 5c and the original iPhone 5. My testing period was a bit too brief to adequately characterize the device but I didn’t have any complaints. The 5s retains the same antenna configuration as the iPhone 5, complete with receive diversity. As Brian discovered after the launch, the Verizon iPhone 5s doesn’t introduce another transmit chain - so simultaneous voice and LTE still aren’t possible on that device.

Apple is proud of its support for up to 13 LTE bands on some SKUs. Despite the increase in the number of bands supported per device, there are still a lot of iPhone 5s SKUs shipping worldwide:

Apple iPhone 5S and 5C Banding
iPhone Model | GSM / EDGE Bands | WCDMA Bands | FDD-LTE Bands | TDD-LTE Bands | CDMA 1x / EVDO Rev A/B Bands
5S A1533 (GSM) / 5C A1532 | 850, 900, 1800, 1900 MHz | 850, 900, 1700/2100, 1900, 2100 MHz | 1, 2, 3, 4, 5, 8, 13, 17, 19, 20, 25 | N/A | N/A
5S A1533 (CDMA) / 5C A1532 | 850, 900, 1800, 1900 MHz | 850, 900, 1700/2100, 1900, 2100 MHz | 1, 2, 3, 4, 5, 8, 13, 17, 19, 20, 25 | N/A | 800, 1700/2100, 1900, 2100 MHz
5S A1453 / 5C A1456 | 850, 900, 1800, 1900 MHz | 850, 900, 1700/2100, 1900, 2100 MHz | 1, 2, 3, 4, 5, 8, 13, 17, 18, 19, 20, 25, 26 | N/A | 800, 1700/2100, 1900, 2100 MHz
5S A1457 / 5C A1507 | 850, 900, 1800, 1900 MHz | 850, 900, 1900, 2100 MHz | 1, 2, 3, 5, 7, 8, 20 | N/A | N/A
5S A1530 / 5C A1529 | 850, 900, 1800, 1900 MHz | 850, 900, 1900, 2100 MHz | 1, 2, 3, 5, 7, 8, 20 | 38, 39, 40 | N/A

 

Apple iPhone 5S/5C FCC IDs and Models
FCC ID | Model
BCG-E2642A | A1453 (5S), A1533 (5S)
BCG-E2644A | A1456 (5C), A1532 (5C)
BCG-E2643A | A1530 (5S)
BCG-E2643B | A1457 (5S)
BCG-E2694A | A1529 (5C)
BCG-E2694B | A1507 (5C)

WiFi

WiFi connectivity also remains unchanged on the iPhone 5s. Dual band (2.4/5GHz) 802.11n (up to 150Mbps) is the best you’ll get out of the 5s. We expected Apple to move to 802.11ac like some of the other flagship devices we’ve seen in the Android camp, but it looks like you’ll have to wait another year for that.

I don’t believe you’re missing out on much by forgoing 802.11ac today, but over the life of the iPhone 5s I do expect greater deployment of 802.11ac networks (which can bring either performance or power benefits to a mobile platform).

WiFi Performance - iPerf

WiFi performance seems pretty comparable to the iPhone 5. The HTC One and Moto X pull ahead here as they both have 802.11ac support.



Final Words

The iPhone 5s is quite possibly the biggest S-update we've ever seen from Apple. I remember walking out of the venue during Apple's iPhone 5 launch and being blown away by the level of innovation, at the platform/silicon level, that Apple crammed into the iPhone 5. What got me last time was that Apple built its own ARM based CPU architecture from the ground up. While I understand that doesn't matter to the majority of consumers, it's no less of an achievement in my eyes. At the same time I remember reading through a sea of disappointment on Twitter - users hoping for more from Apple with the iPhone 5. If you fell into that group last time, there's no way you're going to be impressed by the iPhone 5s. For me however, there's quite a bit to be excited about.

The A7 SoC is seriously impressive. Apple calls it a desktop-class SoC, but I'd rather refer to it as something capable of competing with the best Intel has to offer in this market. In many cases the A7's dual cores were competitive with Intel's recently announced Bay Trail SoC. Web browsing is ultimately where I noticed the A7's performance the most. As long as I was on a good internet connection, web pages just appeared after resolving DNS. The A7's GPU performance is also insanely good - more than enough for anything you could possibly throw at the iPhone 5s today, and fast enough to help keep this device feeling quick for a while.

Apple's move to 64-bit proves that it's not only committed to supporting its own microarchitectures in the mobile space, but also that it's being a good steward of the platform. Just like AMD had to do in the mid-2000s, Apple must plan ahead for the future of iOS and that's exactly what it has done. The immediate upsides of moving to 64-bit today are increased performance across the board as well as some huge potential gains in certain FP and cryptographic workloads.

The new camera is an evolutionary but much appreciated step forward compared to the iPhone 5. Low light performance is undoubtedly better, and Apple presents its users with an interesting balance of spatial resolution and low light sensitivity. The HTC One proved polarizing for users who wanted more resolution and not just great low light performance - with the 5s Apple attempts to strike a more conservative balance. The 5s also benefits from iOS's excellent auto mode, which should serve novice photographers well. I would still love to see full manual controls exposed in the camera UI, but the defaults are quite good for those who don't want to mess with settings. The A7's improved ISP means things like HDR captures are significantly quicker than they were on even the iPhone 5. Shot to shot latency is also incredibly low.

Apple's Touch ID was the biggest surprise for me. I found it very well executed and a nice part of the overall experience. When switching between the 5s and the 5/5c, I immediately miss Touch ID. Apple is still a bit too conservative with where it allows Touch ID in place of a passcode, but even just as a way to unlock the device and avoid typing in my iCloud password when downloading apps it's a real improvement. I originally expected Touch ID to be very gimmicky, but now I'm thinking this may actually be a feature we see used far more frequently on other platforms as well.

The 5s builds upon the same chassis as the iPhone 5 and with that comes a number of tradeoffs. I still love the chassis, design and build quality - I just wish it had a larger display. While I don't believe the world needs to embrace 6-inch displays, I do feel there is room for another sweet spot above 4 inches. For me personally, Motorola has come the closest with the Moto X and I would love to see what Apple does with a larger chassis. The iPhone has always been a remarkably power efficient platform; a larger chassis wouldn't just give it a bigger, more usable screen, it would also make room for a much larger battery. I'm not saying that replacing the 4-inch 5s chassis is the only option; I'd be fine with a third model sitting above it in screen size/battery capacity, similar to how there are both 13 and 15-inch MacBook Pros.

The lack of 802.11ac and LTE-A support also bothers me, as the 5s is so far ahead of the curve elsewhere in silicon. There's not much I can say to either point other than that both will obviously be present in next year's model, and for some they may be features worth waiting for.

At the end of the day, if you prefer iOS for your smartphone - the iPhone 5s won't disappoint. In many ways it's an evolutionary improvement over the iPhone 5, but in others it is a significant step forward. What Apple's silicon teams have been doing for these past couple of years has really started to pay off. From a CPU and GPU standpoint, the 5s is probably the most futureproof of any iPhone ever launched. As much as it pains me to use the word futureproof, if you are one of those people who likes to hold onto their device for a while - the 5s is as good a starting point as any.
