NVIDIA Announces Quadro K6000

by Ryan Smith on 7/23/2013 9:00 AM EST
21 Comments

  • Jon Tseng - Tuesday, July 23, 2013 - link

    And Crysis...? ;-p Reply
  • Hameedo - Tuesday, July 23, 2013 - link

    Well, a full GK110 at 1.0 GHz will be about 40% faster than a GTX 680 / 7970 GHz Edition... so go figure from that! Reply
  • trajan2448 - Tuesday, July 23, 2013 - link

    Not a gaming card. Different drivers. Will NOT be faster in games than a Titan or 780. Reply
  • daku123 - Tuesday, July 23, 2013 - link

    The core GeForce DX architecture should be present, so IMO it'll be faster than a Titan in gaming! :) Reply
  • dragonsqrrl - Tuesday, July 23, 2013 - link

    I wouldn't be so sure about that. It's been shown many times before that Quadros don't have nearly as much of a handicap in games as GeForce cards have in content creation applications. While a similarly spec'd Quadro (and the specs are almost never similar) might perform just under its GeForce counterpart in games, the K6000 is also more powerful than any other card from Nvidia. I wouldn't be at all surprised if this card outperformed a Titan in games. Reply
  • Flunk - Tuesday, July 23, 2013 - link

    Don't buy into the hype; Quadro cards perform about on par with equivalent GeForces in games. The reason for any performance difference at all is that Nvidia purposely disables certain features on GeForce cards (often in drivers) so that they perform worse. They don't similarly disable features on the Quadros.

    If the GPU is the same and the clocks are the same, a Quadro and a GeForce will perform equivalently in games.
    Reply
  • SodaAnt - Tuesday, July 23, 2013 - link

    Not really true. I've had various workstation graphics cards over the years, and all of them performed within a few percent (within the margin of error) of the consumer version of the same card. Reply
  • nathanddrews - Wednesday, July 24, 2013 - link

    Strip it down to 3-4GB and sell it for less (not that they would). You certainly don't need that much for 4K gaming, even with 4xMSAA enabled. Not that you'd want to enable AA and lose what little framerate you have anyway. Reply
  • royalestel - Thursday, September 12, 2013 - link

    They already do that. This is really the refined GPU version of the GeForce 680/780 card with lots of extra memory. Realtime (i.e. game) performance is definitely better with this card. I would, however, appreciate the kind of graphical comparisons between cards that AnandTech does with the actual GeForce cards. Something like a visual sim application would do. In other words, I want to see anisotropic filtering on a flat area of the earth with lots of roads and satellite imagery. I always have problems with anisotropic filtering in this application: either the roads disappear into a blur at 5 miles, or the distant terrain at the horizon aliases like crazy. There are some newer techniques that might alleviate this (Fourier texture filtering), but these are surely far from being implemented in hardware. Reply
  • IanCutress - Tuesday, July 23, 2013 - link

    I'll take four. Just because, you know, stuff. Reply
  • lmcd - Tuesday, July 23, 2013 - link

    I'd buy the car instead, man. Reply
  • nathanddrews - Wednesday, July 24, 2013 - link

    Cars break down. Reply
  • mavere - Tuesday, July 23, 2013 - link

    Considering the clocks and TDP, I guess Nvidia snatched up all the golden Titan dies for this.

    Shame. I wouldn't mind a cooler, quieter single GPU behemoth for gaming.
    Reply
  • cmikeh2 - Tuesday, July 23, 2013 - link

    The only way that is ever going to happen is if people are willing to pay enough for that consumer card to give NVIDIA the same margin it makes on the Quadro. I find that unlikely. Reply
  • iMacmatician - Tuesday, July 23, 2013 - link

    On NVIDIA's Quadro spec page, there's a link to a Linecard PDF that gives the peak GFLOPS of all the recent Quadro cards, to what looks to be the nearest 1 GFLOPS.

    The K6000 has 5196 SP GFLOPS (1/3 DP) and the K5000 has 2150 SP GFLOPS, which would give clock speeds of 902 and 700 MHz respectively.
    Reply
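    iMacmatician's back-calculation above can be sketched as a quick check. This is a minimal sketch, assuming the usual peak-throughput formula (SP GFLOPS = CUDA cores × 2 FMA FLOPs per cycle × clock in GHz) and the published core counts of 2880 for the K6000 (GK110) and 1536 for the K5000 (GK104):

    ```python
    # Derive the core clock implied by NVIDIA's peak single-precision GFLOPS ratings.
    # Assumes: SP GFLOPS = cores * 2 (one FMA = 2 FLOPs per cycle) * clock_GHz.

    def implied_clock_mhz(sp_gflops, cuda_cores):
        """Return the core clock (MHz) implied by a peak SP GFLOPS figure."""
        return sp_gflops / (cuda_cores * 2) * 1000

    print(round(implied_clock_mhz(5196, 2880)))  # K6000 -> 902 MHz
    print(round(implied_clock_mhz(2150, 1536)))  # K5000 -> 700 MHz
    ```

    Both results match the 902 and 700 MHz figures quoted in the comment.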
  • chizow - Wednesday, July 24, 2013 - link

    Guess this opens the door for the rumored GTX Titan Ultra, possibly in November to counter AMD's Volcanic Islands. If they're binning parts for fully-enabled GK110 functional units, they'll also have fully-enabled ASICs that are too leaky to make it as 225W Tesla/Quadro parts. Fully expect 250W Titan Ultras... $1500 np, all the Titan early adopters will be happy to get owned by Nvidia all over again for 192 more SPs and 6GB more memory.

    Also expect to see a few more GK110-based SKUs, like the rumored 2304-SP, 320-bit version, with Titan and 780 pricing shuffled based on how AMD competes with the Volcanic Islands GCN 2.0 update on 28nm.
    Reply
  • vailr - Saturday, July 27, 2013 - link

    Too bad the new (yet-to-be-released) 2013 Mac Pro won't be able to use these. Unless maybe Apple decides to offer an updated "cheese grater" case Mac Pro (with an updated nVidia GPU) in tandem with their cylindrical, AMD-only dual-GPU Mac Pro. Reply
  • Gothmoth - Tuesday, July 30, 2013 - link

    Let's hope they have better drivers for this card.

    I have problems with Nvidia drivers newer than 314.22.

    All drivers newer than 314.22 give me sporadic freezes of 2-4 seconds.
    Looking in the event log, I see that "nvlddmkm" caused a problem every time these freezes happened.

    I just installed a fresh Win7 64-bit, because I bought a new SSD, and tried the latest WHQL driver. I immediately noticed that, using IE, the latest 320.49 driver has these "micro-freezes" too.

    Going back to 314.22 solved all my problems.

    I don't understand how Nvidia gets a "WHQL" stamp for these new drivers,
    and it's really sad to see this happen.
    I never had big problems with Nvidia drivers before, and that's one reason I've bought Nvidia since the GeForce 256.

    Instead of tweaking benchmarks and game performance they should focus on stability!!
    Reply
  • Toyist - Thursday, August 01, 2013 - link

    Have to disagree. I've mostly had Pro cards, and my son gaming cards, on the same machines. We used to do side-by-side tests: I could render a 3D image in about 1/6 the time he could, but in some games he could get almost double the framerates. I would spend a lot more on my cards. Also, until recently (the last five years or so) the architectures of pro cards and game cards were different - you couldn't get a pro and a game version of the "same" card. Pro cards used to focus on OpenGL and gaming cards on things like DirectX. Game companies would write to DirectX, not OpenGL, and while their games would run on the Pro cards, the Pro cards had only weak support for things like DirectX.

    There is also a common misconception about manufacturers "disabling" the same card for the pro and game markets. They do, but there are other factors. Binning is a big one (the same chips are tested, and the "super performers" go in the pro cards and the others in game cards). Also, some of the "disabling" is more like tuning. If I drop a frame or two every second in a game, no big deal. If I drop a frame when rendering a high-budget movie, big issue. Very similar firmware, but differently "disabled" drivers.
    Reply
  • nonebeforenoneafter - Wednesday, August 07, 2013 - link

    It would be very hard for me to imagine buying one of these rather than 4x liquid-cooled 6GB Titans paired with a high-end 4-way-SLI mobo and a 1200-watt PSU, for many reasons. Water-cooled Titans clock higher than standard Titans. GPU rendering with CUDA/OpenCL across 4 GPUs, or even 3, would be incredible. Games, now and in the future, would be insanely fast. You don't need to see 3D viewports in extreme quality too often, and they'd be fast enough to view thousands of objects; you should be organizing scenes and polycount anyway. After Effects would gain a lot from 4 Titans. Reply
  • royalestel - Thursday, September 12, 2013 - link

    There are always tradeoffs. You'd save money, but lose chassis space and use more power. Those might be fine compromises, come to think of it. Reply