Bioshock Infinite

Bioshock Infinite is Irrational Games’ latest entry in the Bioshock franchise. Though it’s based on Unreal Engine 3 – making it our obligatory UE3 game – Irrational has added a number of effects that make the game rather GPU-intensive on its highest settings. As an added bonus it includes a built-in benchmark composed of several scenes, a rarity for UE3 games, so we can easily get a good representation of what Bioshock’s performance is like.

The first of the games AMD allowed us to publish results for, Bioshock is a straight-up brawl between the 290X and the GTX 780 at 2560. The 290X’s performance advantage here is just 2%, much smaller than the leads it enjoyed earlier and essentially leaving the two cards tied, which also makes this one of the few games where the 290X can’t match GTX Titan. At 2560 everything of 290X/GTX 780 caliber or better can stay above 60fps despite the heavy computational load of the depth of field effect, making the 290X the first single-GPU card from AMD that can pull this off.

Meanwhile at 4K things end up rather split depending on the quality setting we’re looking at. At Ultra quality the 290X and GTX 780 are again tied, but neither is above 30fps. Drop down to Medium quality however and we get framerates above 60fps again, while at the same time the 290X finally pulls away from the GTX 780, beating it by 14% and even edging out GTX Titan. As with so many of the games we’re looking at today, in our opinion the loss in quality can’t justify the higher resolution, but it presents another scenario where the 290X demonstrates superior 4K performance.

For no-compromises 4K gaming we once again turn our gaze towards the 290X CF and GTX 780 SLI, and here AMD does very well for themselves. While AMD and NVIDIA are nearly tied at the single-GPU level – keep in mind we’re in uber mode for CF, so the uber 290X has a slight performance edge in single-GPU mode – with multiple GPUs in play AMD sees better scaling from AFR and consequently better overall performance. At 95% the 290X CF achieves a nearly perfect scaling factor here, while GTX 780 SLI manages only 65%. Curiously this is better for AMD and worse for NVIDIA than the scaling factors we see at 2560, which are 86% and 72% respectively.
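
For readers who want to double-check the math, below is a minimal sketch of how these scaling factors are derived from single-card and dual-card framerates. The framerates in the example are placeholders, not our measured results.

```python
# Minimal sketch of the multi-GPU (AFR) scaling calculation.
# The framerates below are placeholders, not benchmark results.

def scaling_factor(single_gpu_fps: float, multi_gpu_fps: float) -> float:
    """Fractional gain over a single GPU; 1.0 (100%) means the second
    GPU doubles performance, i.e. perfect scaling."""
    return multi_gpu_fps / single_gpu_fps - 1.0

# Example: a card going from 30fps to 58.5fps with a second GPU in play
# would show 95% scaling.
print(f"{scaling_factor(30.0, 58.5):.0%}")  # -> 95%
```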

Moving on to our FCAT measurements, it’s interesting to see just how greatly improved the frame pacing is for the 290X versus the 280X, even with the frame pacing fixes in place for the 280X. Whereas the 280X has deltas in excess of 21%, the 290X brings those deltas down to 10%, better than halving the variance in this game. Consequently the frame time consistency we’re seeing goes from being acceptable but measurably worse than NVIDIA’s to essentially equal. In fact 10% is outright stunning for a multi-GPU setup, as we rarely achieve frame times this consistent from those configurations.

Finally, for 4K gaming our variance increases a bit, but not immensely so. Despite the heavier rendering workload and the greater demands of moving these large frames around, the delta percentages hold at 13%.
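
As a point of reference, here is a minimal sketch of how a delta percentage of this sort can be computed from raw frame times, assuming the metric is simply the average frame-to-frame difference expressed as a percentage of the average frame time; the charts themselves are built from full FCAT captures, so treat this as illustrative.

```python
# Sketch of a frame time "delta percentage" metric.
# Assumption: the metric is the mean absolute frame-to-frame difference
# expressed as a percentage of the mean frame time.

def delta_percentage(frame_times_ms):
    deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    mean_delta = sum(deltas) / len(deltas)
    mean_frame_time = sum(frame_times_ms) / len(frame_times_ms)
    return 100.0 * mean_delta / mean_frame_time

# Placeholder frame times in milliseconds, alternating fast and slow frames.
sample = [16.0, 18.0, 16.5, 19.0, 16.2, 18.4]
print(f"{delta_percentage(sample):.1f}%")  # -> 12.7%
```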


396 Comments


  • mr_tawan - Tuesday, November 5, 2013 - link

    The AMD card may suffer from a loud cooler. Let's just hope that the OEM versions will ship with quieter coolers.
  • 1Angelreloaded - Monday, November 11, 2013 - link

    I have to be honest here: it is a beast. The only thing holding it back, in my mind, is the lack of feature sets compared to NVIDIA, namely PhysX. To me that's a bit of a deal breaker; for $150 more the 780 Ti gives me that with a lower TDP and better sound profile, and since we can only pull so much from one 120V breaker without tripping it, the modification needed is a deal breaker for some people depending on where they live.

    Honestly, what I really need to see from a site is 4K gaming at max settings, plus 1600p/1200p/1080p benchmarks with single cards as well as SLI/Crossfire, to see how they scale against each other. Also a benchmark using Skyrim modded to the gills with texture resolutions, to fully see how the VRAM might affect these cards in future games from this next-gen era, where the consoles can natively manage higher texture resolutions; ultimately this will affect PC performance as the old standard of 1-2K textures doubles to 4K, or in a select few cases even 8K.

    With a native 64-bit architecture you'll also be able to draw more system RAM into the equation (Skyrim can use a max of 3.5GB before it dies). With Maxwell coming out, a shared memory pool, a microprocessor core on the die itself, and G-Sync for smoothness, we might see an over-engineered GPU capable of much more than we thought. ATI has their own ideas which will progress as well, and I have a feeling Hawaii is actually a reject of sorts, because they have to compete with Maxwell and engineer more into the cards themselves.
  • marceloviana - Monday, November 25, 2013 - link

    I'm just wondering why this card comes with 32Gb of GDDR5 but we only see 4GB. The PCB shows 16 Elpida EDW2032BBBG chips (2Gb each). This amount of memory would help a lot in large scenes with V-Ray RT.
  • Mat3 - Thursday, March 13, 2014 - link

    I don't get it. It's supposed to have 11 compute units per shader engine, making 44 on the entire chip. But the 2nd picture says each shader engine can only have up to 9 compute units....?
  • Mat3 - Thursday, March 13, 2014 - link

    2nd picture on page three I mean.
  • sanaris - Monday, April 14, 2014 - link

    Who cares? This card was never meant to compute anything.

    It was supposed to be "cheap but decent".
    Initially they set a ridiculous price, but now it goes for around $200-350 on eBay.
    For $200 it's worth the price, because it can only really be used to play games.
    Anyone who wants to play games at medium quality (not future ones) may prefer it.
