Meet The GeForce GTX 780, Cont

With all of that said, GTX 780 does make one notable deviation from GTX Titan: NVIDIA has changed their stock fan programming for GTX 780, essentially slowing down the fan response time to even out fluctuations in fan speed. NVIDIA tells us that, after outright loud fans, the second most important factor in fan noise becoming noticeable is rapidly changing fan speeds, with the shifting pitch and volume drawing attention to the card. Slowing down the response time should in theory keep the fan speed from spiking as much, or from quickly dropping (e.g. during a loading screen) only to have to immediately jump back up again.

In our experience fan response times haven't been an issue with Titan or past NVIDIA cards, and we'd be hard pressed to tell the difference between GTX 780 and Titan. With that said, there's nothing to lose from this change; GTX 780 doesn't seem to be any worse for it, so in our eyes there's no reason for NVIDIA not to go ahead with it.

On that note, since this is purely a software (BIOS) change, we asked NVIDIA whether it could be backported to the hardware-equivalent Titan. The answer is fundamentally yes, but because NVIDIA doesn't have a backup BIOS system, they aren't keen on using BIOS flashing any more than necessary. So an official (or even unofficial) update from NVIDIA is unlikely, but given the user community's adept BIOS modding skills, it's always possible a third party could accomplish this on their own.

Moving on, unlike Titan and GTX 690, NVIDIA will be allowing partners to customize GTX 780, making this the first line of GK110 cards to allow customization. Potential buyers who were for whatever reason disinterested in Titan due to its blower will find that NVIDIA's partners are already putting together more traditional open air coolers for GTX 780. We can't share any data about them yet – today is all about the reference card – but we already have one such card in hand with EVGA's GeForce GTX 780 ACX.

The reference GTX 780 sets a very high bar in terms of build quality and performance, so it will be interesting to see what NVIDIA's partners come up with. NVIDIA tests and approves all designs under their Greenlight program, so every custom card has to meet or beat NVIDIA's reference card in factors such as noise and power delivery, which for GTX 780 will not be an easy feat. However, this requirement also means NVIDIA's partners can deviate from the reference design without buyers needing to worry that custom cards are significantly worse than the reference cards. That benefits NVIDIA's partners, who can attest to the quality of their products ("it got through Greenlight"), and it benefits buyers, who know they're getting something as good as the reference GTX 780 regardless of the specific make or model.

On that note, since we’re talking about card construction let’s quickly dive into overclocking. Overclocking is essentially unchanged from GTX Titan, especially since everything so far is using the reference PCB. The maximum power target remains at 106% (265W) and the maximum temperature target remains at 95C. Buyers will be able to adjust these as they please through Precision X and other tools, but no more than they already could on Titan, which means overclocking is fairly locked down.

Overvolting is also supported in a Titan-like manner, and once again is at the discretion of the card's partner. By default GTX 780 has a maximum voltage of 1.1625v, with approved overvolting allowing the card to be pushed to 1.2v. This comes in the form of higher boost bins, so enabling overvolting is equivalent to unlocking a +13MHz bin and a +26MHz bin along with their requisite voltages. In practice, however, overvolting has only a minimal effect, as most overclocking attempts are going to hit TDP limits before they ever reach the unlocked boost bins and their higher voltages.

GeForce Clockspeed Bins
Clockspeed    GTX Titan    GTX 780
1032MHz       N/A          1.2v
1019MHz       1.2v         1.175v
1006MHz       1.175v       1.1625v
992MHz        1.1625v      1.15v
979MHz        1.15v        1.137v
966MHz        1.137v       1.125v
953MHz        1.125v       1.112v
940MHz        1.112v       1.1v
927MHz        1.1v         1.087v
914MHz        1.087v       1.075v
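As a rough illustration of how the voltage cap interacts with the unlocked bins, the lookup can be sketched in a few lines. The helper below is hypothetical, written purely for illustration (it is not any real NVIDIA tool), with the bin/voltage pairs transcribed from the GTX 780 column of the table above:

```python
# Hypothetical sketch: which boost bin fits under a given voltage cap.
# Bin/voltage pairs transcribed from the GTX 780 column of the table above.
GTX_780_BINS = [
    (914, 1.075), (927, 1.087), (940, 1.100), (953, 1.112),
    (966, 1.125), (979, 1.137), (992, 1.150), (1006, 1.1625),
    (1019, 1.175), (1032, 1.200),
]

def max_boost_bin(bins, voltage_cap):
    """Return the highest clockspeed (MHz) whose required voltage fits under the cap."""
    return max(mhz for mhz, volts in bins if volts <= voltage_cap)

# At the stock 1.1625v limit the card tops out at the 1006MHz bin;
# overvolting to 1.2v unlocks the +13MHz (1019MHz) and +26MHz (1032MHz) bins.
print(max_boost_bin(GTX_780_BINS, 1.1625))  # 1006
print(max_boost_bin(GTX_780_BINS, 1.2))     # 1032
```

In practice, as noted above, TDP limits usually kick in before these top bins are ever reached.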



  • ambientblue - Thursday, August 8, 2013 - link

    You are a sucker if you are willing to pay so much for twice the VRAM and 10% more performance over the 780... if you got your Titans before the 780 was released then sure, it's a massive performance boost over 680s, but that's because the 680 should have been cheaper and named the 660, and Titan should have cost what the 680 was going for. You won't be so satisfied when the GTX 880 comes out and obliterates your Titan at half the cost. Then again, with that kind of money you'll probably just buy 3 of those.
  • B3an - Thursday, May 23, 2013 - link

    I'd REALLY like to see more than just 3GB on high-end cards. It's not acceptable. With the upcoming consoles having 8GB (with at least 5GB usable for games), even by the end of this year we may start seeing current high-end PC GPUs struggling due to a lack of graphics RAM. These console games will have super high-res textures, and when ported to PC, 3GB of graphics RAM will not cut it at high res. I also have 2560x1600 monitors, and there's no way X1/PS4 games are going to run at this res with just 3GB. Yet the whole point of a high-end card is this type of res, as it's wasted on 1080p crap.

    Not enough graphics RAM was also a problem years ago on high-end GPUs. I remember having a 7950 GX2 with only 512MB (1GB total, but 512MB for each GPU) and it would get completely crippled (single-digit FPS) running games at 2560x1600 or even 1080p. Once you hit the RAM limit things literally become a slideshow. I can see this happening again just a few months from now, but to pretty much EVERYONE who doesn't have a Titan with 6GB.

    So I'm basically warning people thinking of buying a high-end card at this point - you seriously need to keep in mind that just a few months from now it could be struggling due to a lack of graphics RAM. Either way, don't expect your purchase to last long; the RAM issue will definitely be a problem in the not-too-distant future (give it 18 months max).
  • Vayra - Thursday, May 23, 2013 - link

    How can you be worried about the console developments, especially when it comes to VRAM of all things, when even the next-gen consoles are looking to be no more than 'on par' with today's PC gaming performance? I mean, the PS4 is just a glorified midrange GPU in all respects, and so is the X1, even though it treats things a bit differently by not using GDDR5. Even the 'awesome' Killzone and CoD Ghosts trailers show dozens of super-low-res textures and areas completely greyed out so as not to consume performance. All we get with the new consoles is that finally, 2011's 'current-gen' DX11 tech is coming to the console @ 1080p. But both machines will be running on that 8GB as their TOTAL RAM, and will be using it for all their tasks. Do you really think any game is going to eat up 5 gigs of VRAM at 1080p? Even Crysis 3 on PC does not do that on its highest settings (it just peaks at/over 3GB, I believe?) at 1440p.

    Currently the only reason to own a GPU or system with over 2GB of VRAM is because you play at ultra settings at a resolution over 1080p. For 1080p, which is what 'this-gen' consoles are being built for (sadly...), 2GB is still sufficient and 3GB is providing headroom.

    Hey, and last but not least, Nvidia has to give us at least ONE reason to still buy those hopelessly priced Titans off them, right?

    Also, aftermarket versions of the 780 will of course be able to feature more VRAM, as we have seen in previous generations from both camps. I'm 100% certain we will be seeing 4GB versions soon.
  • B3an - Friday, May 24, 2013 - link

    The power of a console's GPU has nothing to do with it. Obviously these consoles will not match a high-end PC, but why would they have to in order to use more VRAM?! Nothing is stopping a mid-range or even a low-end PC GPU from using 4GB of VRAM if it wanted to. Same with consoles. And obviously they will not use all 8GB for games (as I pointed out), but we're probably looking at at least 4-5GB going towards games. The Xbox One, for example, is meant to use up to 3GB for the OS and other stuff; the remaining 5GB is totally available to games (or it's looking like that). Both the X1 and PS4 also have unified memory, meaning the GPU can use as much as it wants that isn't used by the OS.

    Crysis 3 is a bad example, because that game is designed with ancient 8-year-old console hardware in mind, so it's crippled from the start even if it looks better on PC. When we start seeing X1/PS4 ports to PC, VRAM usage will definitely jump up, because textures WILL be higher res and other things WILL be more complex (level design, physics, enemy A.I. and so on). In fact, the original Crysis actually has bigger open areas and better physics (explosions, mowing down trees) than Crysis 3, because it was totally designed for PC at the time. This stuff was removed in Crysis 3 because they had to make it play exactly the same across all platforms.

    I really think games will eat up 4+GB of VRAM within the next 18 months, especially at 2560x1600 and higher, and use at least over 3GB at 1080p. The consoles have been holding PCs back for a very, very long time. Even console ports made for ancient console hardware with 512MB VRAM can already use over 2GB on the PC version with enough AA + AF at 2560x1600. So that's just 1GB of VRAM left on a 3GB card, and 1GB is easily gone by just doubling texture resolution.
  • Akrovah - Thursday, May 23, 2013 - link

    You're forgetting that on these new consoles that 8GB is TOTAL system memory, not just the video RAM. Whereas on a PC you have the 3GB of VRAM here plus the main system memory (around 8 gigs being pretty standard at this point).

    I can guarantee you the consoles are not using that entire amount, or even the 5+GB available for games, as VRAM. And this part is just me talking out of my bum, but I doubt many games on these consoles will use more than 2GB of the unified memory for VRAM.

    Also I don't think the res has much to do with the video memory any more. Some quick math: even if the game is triple buffering, a resolution of 2560x1600 only needs about 35 megs of storage. Unless my math is wrong:
    2560x1600 = 4,096,000 pixels at 24 bits each = 98,304,000 bits to store a single completed frame.
    Divide by 8 = 12,288,000 bytes, / 1024 = 12,000 KiB, / 1024 = 11.72 MiB per frame.

    Somehow I don't think a modern graphics card's video memory has much to do with screen resolution; it's mostly used by the texture data.
  • inighthawki - Thursday, May 23, 2013 - link

    Most back buffer swap chains are created with 32-bit formats, and even if they are not, chances are the hardware would convert this internally to a 32-bit format for performance to account for texture swizzling and alignment costs. Even so, a 2560x1600x32bpp back buffer would be 16MB, so you're looking at 32 or 48MB for double and triple buffering, respectively.

    But you are right, the vast majority of video memory usage will come from high resolution textures. A typical HD texture is already larger than a back buffer (2048x2048 is slightly larger than 2560x1600) and depending on the game engine may have a number of mip levels also loaded, so you can increase the costs by about 33%. (I say all of this assuming we are not using any form of texture compression just for the sake of example).

    I also hope anyone who buys a video card with large amounts of ram is also running 64-bit Windows :), otherwise their games can't maximize the use of the card's video memory.
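The buffer-size arithmetic being traded in these comments is easy to verify. Below is a quick sketch; the `buffer_mib` helper is invented purely for checking the numbers, and it assumes uncompressed pixel data (real engines compress most textures, which shrinks these figures considerably):

```python
# Sketch for checking the buffer-size arithmetic in the comments above.
# Assumes uncompressed pixel data; helper name is invented for illustration.

def buffer_mib(width, height, bytes_per_pixel, buffers=1):
    """Size in MiB of one or more uncompressed frame buffers or textures."""
    return width * height * bytes_per_pixel * buffers / 1024**2

# A 2560x1600 back buffer at 32bpp (4 bytes/pixel), triple buffered:
print(buffer_mib(2560, 1600, 4, buffers=3))  # 46.875 (MiB)

# A single uncompressed 2048x2048 32bpp texture:
tex = buffer_mib(2048, 2048, 4)
print(tex)  # 16.0 (MiB)

# A full mip chain adds roughly one third on top:
print(tex * 4 / 3)  # ~21.3 MiB
```

Either way, a handful of swap-chain buffers totals only tens of MiB, so resolution alone isn't what fills a 3GB card; texture data is.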
  • Akrovah - Friday, May 24, 2013 - link

    I was under the impression that in a 32-bit rendering pipeline the upper 8 bits were used for transparency calculation, but that it was then filtered down to 24 bits when actually written to the buffer, because that's how displays take information.

    But then I just made that up in my own mind because I don't actually know how or when the 32-bit render - 24-bit display conversion takes place.

    But assuming I was wrong and what you say is correct (a likely scenario in this case) my previous point still stands.
  • jonjonjonj - Thursday, May 23, 2013 - link

    I wouldn't be worried. The low-end CPU and APU in the consoles won't be pushing anything. The consoles are already outdated and they haven't even launched yet. The consoles have 8GB of TOTAL memory, not 8GB of VRAM.
  • B3an - Friday, May 24, 2013 - link

    Again, the power of these consoles has absolutely nothing to do with how much VRAM they can use. If a low-end PC GPU existed with 4GB of VRAM, it could easily use all of that 4GB if it wanted to.

    And it's unified memory in these consoles; it all acts as VRAM. ALL of the 8GB that isn't used by the OS is available to the GPU and games (apparently 3GB on the Xbox One for the OS/other tasks, leaving 5GB for games).
  • Akrovah - Friday, May 24, 2013 - link

    No, it doesn't all act as VRAM. You still have your data storage objects, like all your variables (of which a game can have thousands), AI objects, pathfinding data, and all the coordinates for everything in the current level/map/whatever. Basically the entire state of the game that is operating behind the scenes. This is not insignificant.

    All the non-OS RAM is available to the games, yes, but games are storing a hell of a lot more data than what is typically stored in video RAM. Hence PC games that need 2GB of RAM may only require 512 megs of VRAM.
