For an article like this, getting a range of CPUs that covers the most common and popular models is very important.  I have been at AnandTech for just over two years now, and in that time we have had Sandy Bridge, Llano, Bulldozer, Sandy Bridge-E, Ivy Bridge, Trinity and Vishera; I tend to be supplied with the top-end processor of each generation for testing (as a motherboard reviewer, it is important to make the motherboard the limiting factor).  A lot of users have jumped to one of these platforms, although a large number are still on Wolfdale (Core 2), Nehalem, Westmere, Phenom II (Thuban/Zosma/Deneb) or Athlon II.  I have pooled my AnandTech resources, contacts, and personal resources to get a good spread of the current ecosystem, with more focus on the modern end of the spectrum.  It is worth noting that a multi-GPU user is more likely to have a top-line Ivy Bridge, Vishera or Sandy Bridge-E CPU and a top-range motherboard than an old Wolfdale.  Nevertheless, we will see how they all perform.  There are a few obvious CPU omissions that I could not obtain for this first review, which will hopefully be remedied in our next update.

The CPUs

My criterion for obtaining CPUs was to include at least one from each of the most recent architectures, as well as a range of cores/modules/threads/speeds.  The basic list as it stands is:

AMD

Name                | Platform / Architecture | Socket | Cores / Modules (Threads) | Speed (MHz) | Turbo (MHz) | L2 / L3 Cache
A6-3650             | Llano                   | FM1    | 4 (4)  | 2600 | N/A  | 4 MB / None
A8-3850             | Llano                   | FM1    | 4 (4)  | 2900 | N/A  | 4 MB / None
A8-5600K            | Trinity                 | FM2    | 2 (4)  | 3600 | 3900 | 4 MB / None
A10-5800K           | Trinity                 | FM2    | 2 (4)  | 3800 | 4200 | 4 MB / None
Phenom II X2-555 BE | Callisto (K10)          | AM3    | 2 (2)  | 3200 | N/A  | 1 MB / 6 MB
Phenom II X4-960T   | Zosma (K10)             | AM3    | 4 (4)  | 3200 | N/A  | 2 MB / 6 MB
Phenom II X6-1100T  | Thuban (K10)            | AM3    | 6 (6)  | 3300 | 3700 | 3 MB / 6 MB
FX-8150             | Bulldozer               | AM3+   | 4 (8)  | 3600 | 4200 | 8 MB / 8 MB
FX-8350             | Piledriver              | AM3+   | 4 (8)  | 4000 | 4200 | 8 MB / 8 MB

Intel

Name          | Architecture   | Socket | Cores (Threads) | Speed (MHz) | Turbo (MHz) | L2 / L3 Cache
E6400         | Conroe         | 775    | 2 (2)  | 2133 | N/A  | 2 MB / None
E6550         | Conroe         | 775    | 2 (2)  | 2333 | N/A  | 4 MB / None
E6700         | Conroe         | 775    | 2 (2)  | 2667 | N/A  | 4 MB / None
Q9400         | Yorkfield      | 775    | 4 (4)  | 2667 | N/A  | 6 MB / None
Xeon X5690    | Westmere       | 1366   | 6 (12) | 3467 | 3733 | 1.5 MB / 12 MB
Celeron G465  | Sandy Bridge   | 1155   | 1 (2)  | 1900 | N/A  | 0.25 MB / 1.5 MB
Core i5-2500K | Sandy Bridge   | 1155   | 4 (4)  | 3300 | 3700 | 1 MB / 6 MB
Core i7-2600K | Sandy Bridge   | 1155   | 4 (8)  | 3400 | 3800 | 1 MB / 8 MB
Core i7-3930K | Sandy Bridge-E | 2011   | 6 (12) | 3200 | 3800 | 1.5 MB / 12 MB
Core i7-3960X | Sandy Bridge-E | 2011   | 6 (12) | 3300 | 3900 | 1.5 MB / 15 MB
Core i3-3225  | Ivy Bridge     | 1155   | 2 (4)  | 3300 | N/A  | 0.5 MB / 3 MB
Core i7-3770K | Ivy Bridge     | 1155   | 4 (8)  | 3500 | 3900 | 1 MB / 8 MB
Core i7-4770K | Haswell        | 1150   | 4 (8)  | 3500 | 3900 | 1 MB / 8 MB

The omissions are clear to see: the i5-3570K, a dual-core Llano/Trinity, a dual- or tri-module Bulldozer/Piledriver, an i7-920 or i7-3820, or anything Nehalem.  These will hopefully appear in a future review.

The GPUs

My first and foremost thanks go to both ASUS and ECS for supplying me with these GPUs for my test beds.  They have been in and out of 60+ motherboards without any issue, and hopefully that will continue.  My usual policy for updating GPUs is to flip between AMD and NVIDIA every couple of generations – last time it was from the HD 5850 to the HD 7970 – so in the future we will move to a 7-series NVIDIA card or a set of Titans (which might outlive a generation or two).

ASUS HD 7970 (HD7970-3GD5)

The ASUS HD 7970 we use is the reference model from the 7970 launch, built on the GCN architecture with 2048 SPs at 925 MHz and 3 GB of 4.6 GHz GDDR5 memory.  We have four cards, to be used in 1x, 2x, 3x and 4x configurations where possible, running PCIe 3.0 when it is enabled by default.

ECS GTX 580 (NGTX580-1536PI-F)

ECS is both a motherboard manufacturer and an NVIDIA card manufacturer, and while most of their VGA models are sold outside the US, some do make it onto e-tailers like Newegg.  This GTX 580 is also a reference model, with 512 CUDA cores at 772 MHz and 1.5 GB of 4 GHz GDDR5 memory.  We have two cards, to be used in 1x and 2x configurations at PCIe 2.0.
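
As a rough back-of-the-envelope on where these two cards sit on paper, the quoted shader counts and clocks give their theoretical single-precision throughput.  The sketch below (purely illustrative, not a benchmark) assumes the usual two FLOPs per shader per clock for an FMA, and that the GTX 580's Fermi shaders run at the 2x "hot clock" of the 772 MHz core:

```python
# Back-of-the-envelope FP32 peak from the specs quoted above (illustrative,
# not a benchmark). Assumes 2 FLOPs/clock per shader (FMA); Fermi shaders
# run at twice the core clock, hence 772 * 2 for the GTX 580.

def fp32_tflops(shaders: int, clock_mhz: float) -> float:
    """Theoretical peak: shaders * 2 FLOPs/clock * clock (MHz -> TFLOPS)."""
    return shaders * 2 * clock_mhz * 1e6 / 1e12

print(f"HD 7970: {fp32_tflops(2048, 925):.2f} TFLOPS")     # ~3.79
print(f"GTX 580: {fp32_tflops(512, 772 * 2):.2f} TFLOPS")  # ~1.58
```

On paper the HD 7970 has well over twice the shader throughput of the GTX 580, which is worth keeping in mind when comparing single-GPU results between the two cards.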

The Motherboards

The CPU is not always the main part of the picture for this sort of review – the motherboard is equally important, as it dictates how the CPU and the GPUs communicate with each other and what the lane allocation will be.  As mentioned on the previous page, there are 20+ PCIe configurations for Z77 alone once you consider that some boards are native, some use a PLX 8747 chip, others use two PLX 8747 chips, and about half of the Z77 motherboards on the market enable four PCIe 2.0 lanes from the chipset (at high latency) for CrossFireX use.  We have tried to be fair and pick motherboards that may carry a small premium but are equipped to deal with the job.  As a result, some motherboards also use MultiCore Turbo (MCT), which, as we have detailed in the past, applies the top turbo speed of the CPU regardless of the loading.
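
To make the MCT distinction concrete, here is a minimal sketch (with hypothetical turbo bins, not any real CPU's tables) of how an MCT-enabled board differs from stock behavior:

```python
# Illustrative only: stock turbo drops bins as more cores load up, while
# MultiCore Turbo (MCT) applies the top (1-core) bin regardless of loading.
# The bins below are hypothetical for a 3500 MHz base / 3900 MHz turbo part.

STOCK_TURBO = {1: 3900, 2: 3800, 3: 3700, 4: 3500}  # MHz by active cores

def effective_clock(active_cores: int, mct: bool) -> int:
    return STOCK_TURBO[1] if mct else STOCK_TURBO[active_cores]

print(effective_clock(4, mct=False))  # 3500 - stock, all four cores loaded
print(effective_clock(4, mct=True))   # 3900 - MCT holds the top bin
```

This is why MCT-enabled boards post slightly higher multi-threaded results than the same CPU at stock, and why we flag it against every result.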

Because of this lane-allocation business, each value in our review is attributed to a CPU, whether it uses MCT, and a lane allocation.  Something such as i7-3770K+ (3 - x16/x8/x8) would therefore represent an i7-3770K with MCT in a PCIe 3.0 tri-GPU configuration.  More on this below.
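
For readers who want the notation spelled out, this small sketch (a hypothetical helper, not part of our test suite) decodes a label into its parts:

```python
# Decode a result label like "i7-3770K+ (3 - x16/x8/x8)": CPU name, a trailing
# '+' for MultiCore Turbo, the PCIe generation, and one lane entry per GPU.
import re

def parse_label(label: str) -> dict:
    m = re.match(r"(?P<cpu>[^+(]+)(?P<mct>\+?)\s*\((?P<gen>\d) - (?P<lanes>[x\d/]+)\)", label)
    if not m:
        raise ValueError(f"unrecognised label: {label}")
    lanes = m.group("lanes").split("/")
    return {
        "cpu": m.group("cpu").strip(),
        "mct": m.group("mct") == "+",   # trailing '+' marks MCT enabled
        "pcie_gen": int(m.group("gen")),
        "gpus": len(lanes),             # one allocation entry per GPU
        "allocation": lanes,
    }

print(parse_label("i7-3770K+ (3 - x16/x8/x8)"))
# {'cpu': 'i7-3770K', 'mct': True, 'pcie_gen': 3, 'gpus': 3,
#  'allocation': ['x16', 'x8', 'x8']}
```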

For Haswell: Gigabyte Z87X-UD3H, ASUS Z87-Pro, and MSI Z87-GD65 Gaming

The ASUS Z87-Pro nominally offers a PCIe 3.0 x8/x8 layout with an additional PCIe 2.0 x4 from the PCH, although that slot starts in x1 mode unless changed in the BIOS.
The MSI Z87-GD65 Gaming gives us PCIe 3.0 x8/x4/x4.
The Gigabyte Z87X-UD3H gives PCIe 3.0 x8/x8 plus PCIe 2.0 x4, but this motherboard was tested with MCT off.

For Sandy Bridge and Ivy Bridge: ASUS Maximus V Formula, Gigabyte Z77X-UP7 and Gigabyte G1.Sniper M3

The ASUS Maximus V Formula has a three-way lane allocation of x8/x4/x4 with Ivy Bridge (x8/x8 with Sandy Bridge), and enables MCT.
The Gigabyte Z77X-UP7 has a four-way lane allocation of x16/x16, x16/x8/x8 or x8/x8/x8/x8, all via a PLX 8747 chip.  It also has a single x16 slot that bypasses the PLX chip and is thus native; all configurations enable MCT.
The Gigabyte G1.Sniper M3 is a little different, offering x16, x8/x8, or, if you accidentally put the cards in the wrong slots, x16 + x4 from the chipset.  This last configuration is seen on a number of cheaper Z77 ATX motherboards, as well as a few mATX models.  The G1.Sniper M3 also implements MCT as standard.

For Sandy Bridge-E: ASRock X79 Professional and ASUS Rampage IV Extreme

The ASRock X79 Professional is a PCIe 2.0 board offering x16/x16, x16/x16/x8 and x16/x8/x8/x8.
The ASUS Rampage IV Extreme is a PCIe 3.0 board offering the same allocations as the ASRock, except that it enables MCT by default.

For Westmere Xeons: The EVGA SR-2

Due to the timing of the first roundup, I was able to use an EVGA SR-2 with a pair of Xeons on loan from Gigabyte for our server testing.  The SR-2 forms the basis of our beast machine ("The Beast") below, using two Westmere-EP Xeons to provide PCIe 2.0 x16/x16/x16/x16 via NF200 chips.

For Core 2 Duo: The MSI i975X Platinum PowerUp and ASUS Commando (P965)

The MSI is the motherboard I used for our quick Core 2 Duo comparison pipeline post in Q1 2013 – it is still sitting on my desk, and it seemed apt to include it in this test.  The MSI i975X Platinum PowerUp offers two PCIe 1.1 slots, capable of CrossFire up to x8/x8.  I also rummaged through my pile of old motherboards and found the ASUS Commando (P965) with a CPU installed; as it offers x16 + x4, it was tested as well.

For Llano: The Gigabyte A75-UD4H and ASRock A75 Extreme6

Llano throws a little oddball into the mix, being a true quad core unlike Trinity.  The A75-UD4H from Gigabyte was the first board to hand, offering two PCIe slots at x8/x8.  As with the Core 2 Duo setups, we are not SLI enabled.
After finding an A8-3850 to test alongside the A6-3650, I pulled out the ASRock A75 Extreme6, which adds three-way CFX as x8/x8 + x4 from the chipset on top of the configurations offered by the A75-UD4H.

For Trinity: The Gigabyte F2A85X-UP4

Technically, A85X motherboards for Trinity support up to x8/x8 in CrossFire, but the F2A85X-UP4, like other high-end A85X motherboards, implements four lanes from the chipset for three-way AMD configurations.  Our initial look at three-way CFX via that chipset link was not that great, and this review will help quantify it.

For AM3: The ASUS Crosshair V Formula

As 990FX covers a lot of processor families, the safest place to sit is on one of the top motherboards available.  Technically the Crosshair V Formula-Z is newer and supports Vishera more readily, but we have not had the Formula-Z in to test, and the original Formula was still able to run an FX-8350 as long as we kept the VRMs cool.  The CVF offers up to three-way CFX and SLI testing (x16/x8/x8).

The Memory

Our good friends at G.Skill have put their best foot forward in supplying us with high-end kits to test.  The memory question is really about what each motherboard will support – to keep testing consistent, no overclocks were performed, meaning that boards and BIOSes limited to a certain DRAM multiplier were set at the maximum multiplier possible.  To keep things fairer overall, the modules were then adjusted for tighter timings.  All of this is noted in our final setup lists.
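
As a worked example of that multiplier cap (illustrative numbers, not any specific BIOS), the achievable memory speed is simply the base clock multiplied by the highest DRAM ratio the board exposes:

```python
# Illustrative: memory data rate = base clock x DRAM multiplier. A BIOS that
# tops out at a low multiplier cannot reach a kit's rated speed, so we set
# the highest multiplier available and tighten timings instead.

def max_dram_speed(base_clock_mhz: float, max_multiplier: int) -> float:
    """Highest DDR3 data rate (MT/s) reachable without overclocking."""
    return base_clock_mhz * max_multiplier

print(max_dram_speed(100, 24))    # 2400 -> a DDR3-2400 kit runs at rated speed
print(max_dram_speed(133.3, 10))  # ~1333 -> why "The Beast" below runs DDR3-1333
```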

Our main memory testing kit is our trusty G.Skill 4x4 GB DDR3-2400 9-11-11 1.65 V RipjawsX kit, which has been part of our motherboard testing for over twelve months.  For times when two systems were being tested side by side, a G.Skill 4x4 GB DDR3-2400 10-12-12 1.65 V TridentX kit was also used.

For The Beast, one of the systems affected by the higher memory divider issue, we pulled in a pair of tri-channel kits from our X58 testing.  These are high-end kits as well, now discontinued as they tended to stop working when given too much voltage.  We have sets of 3x2 GB OCZ Blade DDR3-2133 8-9-8 and 3x1 GB Dominator GT DDR3-2000 7-8-7 for this purpose, which we ran at 1333 6-7-6 due to motherboard limitations at stock settings.

Our Core 2 Duo CPUs get their own DDR2 memory for completeness: a 2x2 GB kit of OCZ Platinum DDR2-666 5-5-5.

For Haswell we were offered new kits for testing, this time from Corsair's Vengeance Pro series: a 2x8 GB kit of DDR3-2400 10-12-12 1.65 V.

Comments

  • ninjaquick - Tuesday, June 4, 2013 - link

    "If you were buying new, the obvious answer would be looking at an i5-3570K on Ivy Bridge rather than the 2500K"

    Ian basically wanted to get a relatively broad test suite, at as many performance points as possible. Haswell, however, is really quite a bit quicker. More than anything, this article is an introduction to how they are going to be testing moving forward, as well as a list of recommendations for different budgets.
  • dsumanik - Tuesday, June 4, 2013 - link

    2 year old mid-range tech is competitive with, and cheaper than, Haswell.

    Hence AnandTech's recommendation.

    The best thing about Haswell is the motherboards, which are damn nice.
  • TheJian - Wednesday, June 5, 2013 - link

    This is incorrect. It is only competitive when you TAP out the GPU by forcing it into situations it can't handle. If you drop the res to 1080p, suddenly the CPU is VERY important and they part like the Red Sea.

    This is another attempt at covering for AMD and trying to help them sell products (you can judge whether it's intentional or not on your own). When no single card can handle the resolutions being forced on them (1440p), you end up with ALL CPUs looking like they're fine. This is just a case of every CPU saying hurry up mr. vid card, I'm waiting (or we're all waiting). Lower the res to where the cards can handle it and the CPUs start to show their colors. If this article were written with 1080p as the focus (as even his own survey shows 96% of us use it OR lower, and adding in 1920x1200 you end up with 98.75%!!) you would see how badly AMD is doing vs Intel, since the video cards would NOT be brick-walled screaming under the load.

    http://www.tomshardware.com/reviews/neverwinter-pe...
    An example of what happens when you put the vid card at 1080p where cpu's can show their colors.
    "At this point, it's pretty clear that Neverwinter needs a pretty quick processor if you want the performance of a reasonably-fast graphics card to shine through. At 1920x1080, it doesn't matter if you have a Radeon HD 7790, GeForce GTX 650 Ti, Radeon HD 7970, or GeForce GTX 680 if you're only using a mid-range Core i5 processor. All of those cards are limited by our CPU, even though it offers four cores and a pretty quick clock rate."

    It's not just Civ5. I could point out how inaccurate the suggestions in this 1440p article are all day. Just start looking up CPU articles on other websites and check the 1080p data. Most CPU articles test with a top card (7970 or 680, etc.) so you get to see the TRUTH. The CPU is important in almost EVERY game, unless you shoot the resolution up so high they all score the same because your video card can't handle the job (thus making ANY CPU spend all day waiting on the vid card).

    I challenge anandtech to rerun the same suite, same chips at 1080p and prove I'm wrong. I DARE YOU.

    http://www.hardocp.com/article/2012/10/22/amd_fx83...
    More evidence of what happens when the GPU is NOT tapped out. Look at how Intel is KILLING AMD at HardOCP. Even if you say "but eventually I'll up my res and spend $600 on a 1440p monitor", you have to understand that as you get better GPUs that can handle that res, you'll hate the fact you chose AMD for a CPU as it will AGAIN become the limiter.
    "Lost Planet is still used here at HardOCP because it is one of the few gaming engines that will reach fully into our 8C/8T processors. Here we see Vishera pull off its biggest victory yet when compared to Zambezi, but still lagging behind 4 less cores from Intel."

    "Again we see a new twist on the engine above, and it too will reach into our 8C/8T. While not as pronounced as Lost Planet, Lost Planet 2 engine shows off the Vishera processors advancements, yet it still trails Intel's technology by a wide margin."

    "The STALKER engine shows almost as big an increase as we saw above, yet with Intel still dealing a crippling gaming blow to AMD's newest architecture."
    Yeah, a 65% faster Intel is a LOT, right? Understand that if you go AMD now, once you buy a card (20nm Maxwell etc.? 14nm eventually in 3yrs?) you will CRY over your CPU limiting you at even 1440p. Note the video card HardOCP used for testing was ONLY a GTX 470. That's old junk; he could now run with a 7970 GHz or 780 GTX and up the res to 1080p and show the same results. AMD would get a shellacking.

    http://techreport.com/review/24879/intel-core-i7-4...
    Here, techreport did it at 1080p. 20% lower for the A10-5800 than the 4770K in Crysis 3. It gets worse with Far Cry 3 etc. In Far Cry 3 the i7-4770K scored 96fps at 1080p, yet AMD's A10-5800 scored a measly 68. OUCH. So roughly 30% slower in this game. HOLY COW man, check out Tomb Raider... Intel 126fps! AMD A10-5800 68fps! Does Anandtech still say this is a good CPU to go with? At the res 98.75% of us run at, YOU ARE WRONG. That's almost 2x faster in Tomb Raider at 1080p! Metro Last Light: Intel 93fps vs. AMD A10-5800 51fps, again almost TWO TIMES faster!

    From Ian's conclusion page here:
    "If I were gaming today on a single GPU, the A8-5600K (or non-K equivalent) would strike me as a price competitive choice for frame rates, as long as you are not a big Civilization V player and do not mind the single threaded performance. The A8-5600K scores within a percentage point or two across the board in single GPU frame rates with both a HD7970 and a GTX580, as well as feel the same in the OS as an equivalent Intel CPU."

    He's not even talking about the A10-5800 that got SMASHED at techreport as shown in the link. Note they only used a Radeon 7950. A 7970 GHz or GTX 780 would be even less taxed and show even larger CPU separations. I hope people are getting the point here. Anandtech is MISLEADING you at best by showing a resolution higher than 98.75% of us are using and tapping out the single GPU. I could post a dozen other CPU reviews showing the same results. Don't walk, RUN away from AMD if you are a gamer today (or tomorrow). Haswell boards are supposed to take a Broadwell chip also, even more ammo to run from AMD.

    Ian is recommending a CPU that is lower than the one I show getting KILLED here. Games might not even be playable, as the A10-5800 was hitting 50fps AVG on some things. What would you hit with a lower CPU avg, and worse, what would the mins be? Unplayable? Get a better CPU. You've been warned.
  • haukionkannel - Wednesday, June 5, 2013 - link

    Hmm... If the game is fast enough at 1440p then it is fast enough for 1080p... We are talking about serious players. Who on earth would buy a 7970 or 580 for gaming at 1080p? That is serious overkill...
    We all know that Intel will run faster if we use 720p, just because it is a faster CPU than AMD; nothing new there since the Pentium 4 vs. Athlon era. What this article tells us is that if you want to play games with some serious GPU power, you can save money by using an AMD CPU with a single, or even in some cases dual, GPU. If you go beyond that the CPU becomes a bottleneck.
  • TheJian - Thursday, June 6, 2013 - link

    The killing happened at 1080p also, which is what techreport showed. Since 98.75% of us run 1920x1200 or below, I'm thinking that is pretty important data.

    The second you put in more than one card the CPUs separate, even at 1440p. Meaning, next year's SINGLE card or the one after will AGAIN separate the CPUs, as that single card will be able to wait on the CPU as the bottleneck goes back to the CPU. Putting that aside, HardOCP showed even the mighty Titan at $1000 had stuff turned off at 1080p. So you are incorrect. Is it serious overkill if HardOCP is turning stuff off for a smooth game experience? The 7970/GTX680 had to turn off even more stuff in the 780GTX review (Titan and 780GTX mostly had the same stuff on, but the 7970GHz and 680GTX they compared to turned off quite a bit to remain above 30fps).

    I'm a serious player, and I can't run 1920x1200 with my Radeon 5850, which was $300 when I bought it. I'm hoping Maxwell will get me 30fps with EVERYTHING on in a few games at 1440p (I'm planning on buying a 27 or 30in at some point) and for the ones that don't I'll play them on my Dell 24 as I do now. But the current cards (without spending a grand, and even that doesn't work) in single format still have trouble with 1080p as HardOCP etc. has shown. I want my next card to at least play EVERY game at 1920x1200 on my Dell, and hope for a good portion on the next monitor purchase. With the 5850 I run a lot of games on my 22in at 1680x1050 to enable everything. I don't like turning stuff down or off, as that isn't how the dev intended me to play their game, right?

    Apparently you think all 7970 and 580 owners are running 1440p and up? Ridiculous. The Steam survey says you are woefully incorrect. 98.75% of us are running 1920x1200 or below, and a TON of us have 7970, 680, 580 etc. etc. (not me yet) and enjoy the fact that they NEVER turn stuff down (well, apparently you still do on some games... see the point?). Only DUAL card owners are running above that, as the Steam survey shows; go there and check out the breakdown. You can see the population running above 1920x1200 (even as small as that 1% is...LOL) has TWO cards. So you are categorically incorrect, or do Steam's users change all their resolutions down just to fake a survey?...ROFL. Ok. Whatever. You expect me to believe they get done with the survey and jack it up for UNDER 30fps gameplay? Ok...

    Even here, at 1440p for instance, Metro only ran at 34fps (and Last Light is more taxing than 2033). How low do you think the minimums are when you're only doing 34fps AVERAGE? UNPLAYABLE. I can pull Anandtech quotes that say you'd really like 60fps to NEVER dip below 30fps minimum. In that they are actually correct, and other sites agree...
    http://www.guru3d.com/articles_pages/palit_geforce...
    "Frames per second Gameplay
    <30 FPS very limited gameplay
    30-40 FPS average yet very playable
    40-60 FPS good gameplay
    >60 FPS best possible gameplay

    So if a graphics card barely manages less than 30 FPS, then the game is not very playable, we want to avoid that at all cost.
    With 30 FPS up-to roughly 40 FPS you'll be very able to play the game with perhaps a tiny stutter at certain graphically intensive parts. Overall a very enjoyable experience. Match the best possible resolution to this result and you'll have the best possible rendering quality versus resolution, hey you want both of them to be as high as possible.
    When a graphics card is doing 60 FPS on average or higher then you can rest assured that the game will likely play extremely smoothly at every point in the game, turn on every possible in-game IQ setting."

    So as the single 7970 (assuming GHz edition here in this 1440p article) can barely hit 34fps, by guru3d's definition it's going to STUTTER, right? You can check max/avg/min everywhere and you'll see there is a HUGE diff between min and avg. Thus the 60fps point is assumed good to ensure above 30 min and no stutter (I'd argue higher depending on the game, multiplayer etc. as you can tank when tons of crap is going on). Guru3d puts that in EVERY GPU article.

    The single 580 in this article can't even hit 24fps, and that is AN AVERAGE. So it is totally unplayable, thus making the whole point moot, right? You're going to drop to 1080p just to hit 30fps, and you say this card and a 7970 are overkill for 1080p? Even this FLAWED article here proves you WRONG.

    Sleeping Dogs, right here in this review, runs UNDER 30fps AVERAGE on a SINGLE 7970. What planet are you playing on? If you are hitting 28.2fps avg your gameplay SUCKS!

    http://www.tomshardware.com/reviews/geforce-gtx-77...
    Bioshock infinite 31fps on GTX 580...Umm, mins are going to stutter at 1440p right? Even the 680 only gets 37fps...You'll need to turn both down for anything fluid maxed out. Same res for Crysis 3 shows even the Titan only hitting 32fps and with DETAILS DOWN. So mins will stutter right? MSAA is low, you have two more levels above this which would put it into single digits for mins a lot. Even this low on msaa the 580 never gets above 22fps avg...LOL. You want to rethink your comments yet? The 580's avg was 18 FPS! 1440p is NOT for a SINGLE 580...LOL. Only 25fps for 7970...LOL. NOT PLAYABLE on your 7970ghz either. Clearly this game is 1080p huh? Look how much time in the graph 7970ghz spends BELOW 20fps at 1440p. Serious gamers play at 1080p unless they have two cards. FAR CRY 3, same story. 7970ghz is 29fps...ROFL. The 580 scores 21fps...You go right ahead and try to play these games at 1440p. Welcome to the stutterfest my friend.
    "GeForce GTX 770 and Radeon HD 7970 GHz Edition nearly track together, dipping into the mid-20 FPS range."
    Yeah, Far Cry will be good at 20fps.

    Hitman: Absolution has to disable MSAA totally...LOL. Even then the 580 only hits 40fps avg.

    Note the tomb raider comment at 1440p:
    "The GeForce GTX 770 bests Nvidia’s GeForce GTX 680, but neither card is really fluid enough to call the Ultimate Quality preset smooth."
    So 36fps and 39fps avg for those two is NOT SMOOTH. 770 dropped to 20fps for a while.

    A Titan isn't even serious overkill for 1080p. It's just good enough, and for HardOCP a game or two had to be turned down even on it at 1080p! The data doesn't lie. Single cards are for 1080p. How many games do I have to show you dipping into the 20's before you get it? Batman AC barely hits 30's avg on a 7970GHz with 8xMSAA, and you have to turn PhysX off (not NV PhysX, PhysX period). Check Tom's charts for GPUs.

    In HardOCP's review of the 770GTX, 1080p was barely playable with the 680GTX and everything on. Upping to 2560x1600 caused nearly every card to need tessellation down and PhysX off in Metro Last Light. 31fps min on the 770 with SSAA OFF and PhysX OFF!
    http://hardocp.com/article/2013/05/30/msi_geforce_...
    You must like turning stuff off. I don't think you're a serious gamer until you turn everything on and expect it to run there. NO SACRIFICING quality! Are we done yet? If this article really tells you to pair expensive GPUs ($400-1000) with a cheapo $115 AMD CPU then they are clearly misleading you. It looks like that is exactly what they got you to believe. Never mind your double-GPU comment paired with the same crap CPU, adding to the ridiculous claims here already.
  • Calinou__ - Friday, June 7, 2013 - link

    "Serious gamers play at 1080p unless they have two cards."

    Fun fact 2: there are properly coded games out there which will run fine at 2560×1440 on mid-to-high end cards.
  • TheJian - Sunday, June 9, 2013 - link

    No argument there. My point wasn't that you can't find a game that runs OK at 1440p. I could cite many, though I think most wouldn't be maxed out doing it on mid cards and surely aren't what most consider graphically intensive. But there are FAR too many that don't run there without turning lots of stuff off, as many sites I linked to show. Also, 98.75% of us don't even have monitors that go above 1920x1200 (I can't see many running NON-native, but it's possible), so I'm not quite sure fun fact 2 matters much; my statement is still correct for nearly 99% of the world, right? :) There are probably a few people in here who care what the top speed of a Veyron SS is (maybe they can afford one, 258mph I think), but for the vast majority of us, we couldn't care less about it since we'll never buy a car over 100K. I probably could have said 50K and still be right for most.

    Your statement kind of implies coders are lazy :) Not going to argue that point either...LOL. Not all coders are lazy mind you...But with so much power on pc's it's probably hard not to be lazy occasionally, not to mention they have the ability to patch them to death afterwards. I can handle 1-2 patches but if you need 5 just to get it to run properly after launch on most hardware (unless adding features/play balancing etc like a skyrim type game etc) maybe you should have kept it in house for another month or two of QA :) Just a thought...
  • Sabresiberian - Wednesday, June 5, 2013 - link

    So, you want an article specifically written for gaming at 2560x1440 to do the testing at 1920x1080?

    Your rant starts from that low point and goes downhill from there.
  • TheJian - Thursday, June 6, 2013 - link

    You completely missed the point. The article is testing for 0.87% of the market. That is less than one percent. This article will be nice to reprint in 2-3yrs... Then it may actually be relevant. THAT is the point. I think it's a pretty HIGH point, not low, and the fact that you choose to ignore the data in my post doesn't make it any less valid or real. Nice try though :) Come back when you have some data actually making a relevant point, please.
  • Calinou__ - Friday, June 7, 2013 - link

    So, all the websites that are about Linux should shut down because Linux has ~1% market share? Nope.
