Triple Buffering: Why We Love It

by Derek Wilson on 6/26/2009 12:34 AM EST

  • CallsignVega - Thursday, November 12, 2009 - link

    Are you people sure that triple buffering is being enabled even with tools like DXTweaker and ATI Tray tools? In theory, shouldn't the card be working just as hard with triple buffering on as it does with VSync disabled?

    I've tested EvE online with my ATi HD5870. With Vsync off, my FPS are of course very high and I can see the high load on the GPU in regards to amperage used and heat produced. If I turn VSync on, it uses very little amperage and creates very little heat only running at 60fps.

    In triple buffer theory, shouldn't the graphics card be working just as hard but only displaying 60 FPS with VSync on? I've made profiles for EvE under ATI Tray Tools and forced triple buffering on, but I get the same results as with VSync on: very low amperage and very little heat. This leads me to believe that triple buffering is, in fact, not being applied.

    Could all of these so-called Direct3D triple buffering forcing apps really not be doing anything? It could just be a placebo effect, and people think it's working just because they marked the checkbox. There's no way I could see triple buffering actually working with my HD5870 in Direct3D with such very low stress on the card compared to VSync off. Besides polling the stress level of the card, is there any other way to see if triple buffering is ACTUALLY turned on and working?
  • Skakruk - Wednesday, July 22, 2009 - link

    From the last page of the article:

    "...they have just as little input lag as double buffering with no vsync at the start of output to the monitor."

    I believe that is misinformation, as in my experience input lag results can vary significantly from game to game.

    For example, enabling V-Sync and Triple Buffer in UT2004 results in input lag so bad that the game is all but unplayable, but enabling V-Sync and Triple Buffer in CoD4 creates barely any input lag at all.

    Although CoD4 was very much playable, neither game exhibited "just as little input lag as double buffering with no vsync".

    NOTE: For both UT2004 and CoD4, I was forcing V-Sync and Triple Buffer via D3DOverrider, and all mouse filtering etc. was disabled in both games.




  • griffhamlin - Thursday, July 16, 2009 - link

    You didn't understand a single piece of this article, moron. LMAO!

    Enabling triple buffering without vsync??!! Triple buffering IS MEANT FOR VSYNC, to avoid the slowdown due to double buffering.
    And vsync is meant to avoid TEARING. GOT IT? u_o

    Stop posting crap.
    You want vsync? Go for triple buffering. Don't think, do it...
    You don't want vsync? Then it doesn't matter whether you enable triple or double buffering.
  • davidri - Sunday, July 26, 2009 - link

    Wow, you're rude.
  • andy80517 - Wednesday, October 31, 2012 - link

    LOL good comment XD !
  • davidri - Tuesday, July 14, 2009 - link

    "So there you have it. Triple buffering gives you all the benefits of double buffering with no vsync enabled in addition to all the benefits of enabling vsync. We get smooth full frames with no tearing."

    This statement is not true. I enabled triple buffering without vsync on a GTX 280/27" LCD @ 60hz and ran Elder Scrolls Oblivion. The vertical tearing was awful.

    I have been running the 280 with vsync and double buffering enabled because I'll take the performance overhead to alleviate vertical tearing.
  • jp777cmoe - Saturday, July 18, 2009 - link

    I read the article but I'm not sure about vsync and triple buffering..
    Would it work well? I use a Samsung 2233RZ 120Hz LCD monitor.
  • griffhamlin - Wednesday, July 15, 2009 - link

    *Facepalm*
  • davidri - Wednesday, July 15, 2009 - link

    Thank you for the duly astute feedback.
  • MamiyaOtaru - Thursday, July 16, 2009 - link

    RTFA. A DX game has to support triple buffering for you to get the benefit. If you toggled it on in the control panel, you were toggling it on for OpenGL games only.

    But *if* the game supports it, you'd be better off with triple buffering for avoiding tearing, though I still prefer double buffering with no vsync for responsiveness
  • davidri - Sunday, July 26, 2009 - link

    I'll just stick to vsync on with whatever the default frame buffer is in the Nvidia control panel. I get very good performance (most games I play run at 60fps) and no vertical tearing. I don't care to deal with third party apps.
  • griffhamlin - Thursday, July 16, 2009 - link

    You can force triple buffering in DX. Some apps exist for that.
    D3DOverrider, for one...
  • Muhammed - Wednesday, July 8, 2009 - link

    I must say, excellent article up to page 2; after that things started to get REAL messy.

    I don't consider myself too stupid, nor a genius, but I am confident I am smart, and everything was fine until the second page, where you explained the principles of the idea. I quickly understood it from one concentrated read, but the horses example is simply HORRIBLE. I understand you didn't want to waste 9 pages on a simple thing like V-Sync, so you wrapped up the concept quickly, but this has left us readers really confused.

    Firstly, you started slow (page 2), elaborating on every little detail, then you provided an example that should have made the picture even clearer, but on the contrary.. you put a lot of possibilities and new concepts into this example, and you successfully made it MORE COMPLEX instead of SIMPLER.

    Secondly, the horrible elaboration in the example made it even more convoluted; adding complexity to the equation = HORRIBLE example.

    I am waiting for a follow-up article.. one with even 18 pages. I will read them all.. every last letter, for this is the price of knowledge. Just remember: SIMPLIFY and ELABORATE.

    Thank you for your understanding.
  • quarup - Tuesday, July 7, 2009 - link

    The following seems confusing; could you please clarify or reword it:

    "In double buffering, this happens with every frame even if the next frames done after the monitor is finished receiving and drawing the current frame (meaning that it might not be displayed at all if another frame is completed before the next refresh)."

    It sounds like it says two contradicting things about double-buffering + vsync:
    1. a swap buffer happens once per frame
    2. a frame might be skipped if we're rendering frames too fast (this sounds more like triple buffering?)

    Also:

    "With triple buffering, front buffer swaps only happen at most once per vsync."

    Isn't this true with double buffer + vsync, too?
  • pakotlar - Sunday, July 5, 2009 - link

    T.buffering is great, but tighter integration between the abstraction layer and developer tools, along with general programming protocols on GPUs, should (maybe with the use of a dynamic LOD system like SPVO) kill the need for t.buffering or vsync. There has to be a better solution in place today for homogeneous hardware.
  • happymanz - Saturday, July 4, 2009 - link

    Hi,

    I mostly play games at 1024x768@120Hz if they are non-competitive, and 800x600\640x480@160Hz if they are competitive. (I am not able to notice any tearing at 160Hz.)

    What settings are recommended for gamers using CRT or 120Hz LCD monitors? (Most people will not notice any tearing even at 120Hz.)

    A lot of older (and still popular) games run various versions of the Quake engine, where physics are affected by the FPS (I'm no expert on the matter).

    (I have yet to see any LCD monitor getting close in terms of image quality, and so far it seems you can't have your cake and eat it as well when it comes to different types of panels.)
  • urebelscum - Thursday, July 2, 2009 - link

    Nice article; I loved seeing the example. However, from reading all the comments, I think a follow-up article is needed. First, another example is needed: when rendering is slower than the monitor refresh. I thought I pictured what would happen, but now I'm not sure. Maybe another example covering what happens when a game drops below the 60 fps threshold; but if the other example is clear, maybe not. The rest of the follow-up should include more info: basically add render ahead to the first example, plus a list of which games use true triple buffering and which use the mis-named render ahead.

    The last is why I still don't use "triple buffering" all the time. It seems most games I play are calling render ahead by the wrong name, so I leave the wrongly labeled "triple buffering" disabled.

    Two things that probably are beyond the scope are: how to tell if a game is using true triple buffering or if it's using render ahead, and what devs need to do to use true triple buffering. (I'm following a couple open source games that say they support triple buffering, but might be using render ahead.)
  • DerekWilson - Thursday, July 2, 2009 - link

    Thanks for the feedback. I'm looking into the possibility of a follow up and appreciate your suggestions of things to look into.
  • castanza - Tuesday, June 30, 2009 - link

    Enabling triple buffering whenever possible is not the right idea.

    Again, the choices are:
    1) double buffer w/o vsync
    2) double buffer w/ vsync
    3) triple buffer w/ vsync

    Suppose your screen refreshes @ 60 Hz (pretty common now).

    The key question is: can your machine pump out 60 fps consistently for the game in question?

    If it can, then you probably want double buffer w/ vsync. I enabled this setting in L4D and I can enjoy NO TEARING with minimum lag because L4D gives very nice frame rates on my machine, not often dipping below 60 fps.

    If it can't, then your choice depends on how well you can tolerate lag in this particular game. In some games you may not notice it, in others you will. I find it less noticeable in driving games than fps games for example. I tried this setting in L4D, and I found the added lag unacceptable. Anyway if you don't mind a bit of extra input lag in this particular game, then you want triple buffer w/ vsync. Otherwise, if getting rid of lag is more important than eliminating tearing, you'll choose double buffer w/o vsync.

    That's my $0.02 on this subject :)
  • Hrel - Monday, June 29, 2009 - link

    Sounds to me like the best option would be a triple buffer with a rendering queue, coupled with a rendered-frames management feature.

    From this, I THINK that to get the best image a COMPLETED frame should be shown on every single refresh of the monitor.

    And in order to reduce lag as much as possible, the most recent fully rendered frame should be put out to the monitor, and all the older frames should be thrown out. Which means skipping could occur, but with proper management it should be so minor that we really won't notice.

    Especially as monitor refresh rates go up to 120Hz and beyond.

    Comments anyone???
  • DerekWilson - Wednesday, July 1, 2009 - link

    "Skipping" doesn't occur in games like it does with a video -- there is not a set number of frames that must be rendered in a set amount of time. The action happens independently of the frames rendered in a game, while for a video there is an exact framerate that needs to be maintained in order to see smooth motion as it was captured.

    In the old days, console games would tie the game timer to framerate, which was always set to vsync. If frame rate dropped from 60 FPS to 30, the game would actually slow down (when too much action was going on on the screen). Modern PC games do not rely on framerate to time their game; instead, framerate is a snapshot of the game at a certain time.

    if you drop all but the most recently completed frame, then you are just doing triple buffering the way this article describes.
  • ufon68 - Monday, March 28, 2016 - link

    This is not completely accurate.

    While usually, and for a good reason, physical simulation indeed "ticks" independent of FPS, the actual gameplay logic is usually tied to the frames being displayed.
    You don't see the game slowing down or speeding up because the simulation takes into account the FPS, i.e. it multiplies everything by the delta time, which is the time it took to tick the last frame.
    For instance, to translate an object along a vector at a certain speed, you move it every frame by: Direction * Speed * DeltaTime (DeltaTime being the time it took the game to tick the last frame in seconds; given a constant fps [just for simplification purposes, the fps can move up and down and this will still work] of 10, the DeltaTime is 0.1 for this given frame/tick).

    So it's not correct that what you see are snapshots of what's going on in the game world, it just gives you that impression by doing this neat trick.
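
    A minimal sketch of that delta-time idea (the vector type, names, and numbers here are made up for illustration, not from any particular engine): moving an object by Direction * Speed * DeltaTime covers the same distance per second whether the game ticks at 30 fps or 120 fps.

        #include <cstdio>

        struct Vec2 { float x, y; };

        int main() {
            const Vec2 direction = {1.0f, 0.0f};   // unit vector along +x
            const float speed = 5.0f;              // units per second
            const float dts[] = {1.0f / 30.0f, 1.0f / 120.0f};

            // Tick the same one-second interval at two different frame rates.
            for (float dt : dts) {
                Vec2 p = {0.0f, 0.0f};
                int ticks = static_cast<int>(1.0f / dt + 0.5f);
                for (int i = 0; i < ticks; ++i) {
                    // Direction * Speed * DeltaTime, as described above.
                    p.x += direction.x * speed * dt;
                    p.y += direction.y * speed * dt;
                }
                std::printf("dt = %.4f s -> moved %.2f units in one second\n", dt, p.x);
            }
            return 0;
        }

    Both cases print the same distance, which is exactly the frame-rate independence the trick is meant to provide.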
  • VinnyV - Monday, June 29, 2009 - link

    I just wanted to say that I really appreciate this article. I think I just went from understanding about 10% of what is usually discussed on this site to about 11 or 12%. Thanks! Please post more articles like this!
  • iwodo - Monday, June 29, 2009 - link

    I see this as a DirectX problem only? Maybe we should call on Microsoft to improve it... (Too late for DirectX 11??)
  • Dospac - Sunday, June 28, 2009 - link

    Derek, it would be interesting to get to the bottom of the multi-GPU input delay issue as well as devise a quantitative way to test the delay with various setups. It's confounding that this hasn't been investigated sooner and been sorted out. The potential PQ improvement is well worth your efforts. Thank you!
  • DerekWilson - Sunday, June 28, 2009 - link

    getting to the bottom of why the delay happens conceptually isn't that complex -- there are a lot of issues in inter-GPU communication and synchronization that can cause it.

    quantitative testing is possible but pretty expensive ... i'll see if i can convince Anand to invest in the equipment :-)
  • DerekWilson - Sunday, June 28, 2009 - link

    Added a note at the end of the article to try and help clear the air about the confusion over triple buffering as a page flipping method and flip queues (render ahead) with three buffers.

    I also wanted to note that this topic is not just confusing for gamers -- game developers do not always get their labeling right and sometimes incorrectly refer to flip queues as "triple buffering".

    I do apologize for not addressing this issue at publication, but I hope this helps to clear the air.
  • Touche - Sunday, June 28, 2009 - link

    Have you seen this?

    http://msdn.microsoft.com/en-us/library/ms796537.a...
    http://msdn.microsoft.com/en-us/library/ms893104.a...
  • DerekWilson - Wednesday, July 1, 2009 - link

    What they are showing is 1 frame render ahead with vsync. In MS DX terms, this is a flip chain with 2 back buffers and a present interval of one.

    This is them calling it triple if it uses three total buffers. This is still a flip queue and should be referred to as such to avoid confusion.
  • mikeev - Sunday, June 28, 2009 - link

    Add me to the list of people who tried triple buffering but had to turn it OFF due to the input lag.

    I ran the test in L4D anyway. It was unbearable. I couldn't hit a thing. The input lag was actually noticeably less with double buffering + vsync ON.

    Maybe I'm doing something wrong, but my results do not jive with this article at all.
  • rna - Sunday, June 28, 2009 - link

    From my own fiddling around,

    Left 4 Dead, "V-sync with Triple Buffering" = Unbearable input lag.
    Doom 3 with Triple Buffering forced on in the nVidia control panel and v-sync turned on feels as responsive as with v-sync disabled.

  • DerekWilson - Wednesday, July 1, 2009 - link

    I still haven't confirmed with the developer, but I now think the "triple buffering" that L4D uses is actually a flip queue with 1 frame render ahead (two back buffers; three total buffers).

    Doom 3 with triple buffering forced in the nvidia control panel with vsync will work exactly as described in this article ...

    To double check, I asked NVIDIA for specifics -- triple buffering as forced in their control panel (which only works for OpenGL games) performs exactly the way this article describes that it should.
  • DerekWilson - Sunday, June 28, 2009 - link

    I will do my best to develop a quantitative input lag test. If I can achieve that goal then I will test this and other reported issues.
  • Dospac - Sunday, June 28, 2009 - link

    It may be due to Crossfire or ATI's drivers, but enabling vsync and forcing triple buffering with D3DOverrider wrecks the input responsiveness on my system (Vista64 and 3870X2).

    I used to always play with Vsync and triple buffering when I was on a 120Hz CRT. With a 60Hz LCD, shooters are unplayable. This article is giving inaccurate advice when it states that input lag is not increased.
  • DerekWilson - Sunday, June 28, 2009 - link

    multiGPU options and triple buffering do not play nice together at this point in time.
  • bobjones32 - Sunday, June 28, 2009 - link

    I just fired up Left 4 Dead and tested the various vsync options:

    -vsync disabled
    -vsync enabled, double buffering
    -vsync enabled, triple buffering
    -vsync disabled in game, forced through D3DOverrider with triple buffering

    My observations (note - I can retain a perfect 60fps on my 60Hz monitor):
    1) triple-buffered vsync still had a noticeable amount of mouse lag
    2) double-buffered vsync seemed to have *less* lag, oddly enough
    3) There was some odd hitching that took place every second with vsync on, regardless of triple buffering settings.

    Oddly enough, mouse lag in Half-Life 2: Episode Two (with either double buffering or triple buffering) was much less noticeable, but that hitching every second was still there.


    Derek - any idea why this might be the case?
  • Scalarscience - Sunday, June 28, 2009 - link

    Are you using Crossfire, SLI or a dual gpu card?
  • bobjones32 - Sunday, June 28, 2009 - link

    No, single-card 4870 setup.
  • DerekWilson - Wednesday, July 1, 2009 - link

    I have no idea why you would see the hitching issue.

    I do believe my guess about how L4D does it was wrong though: I now think they use a flip queue with three total buffers rather than the technique described in this article.
  • Ruud van Gaal - Friday, May 25, 2012 - link

    One thing I had in my own game with a 1-second hitch was exposure calculation. Mipmapping (through the gfx card) a single frame down to 1 pixel actually took quite a bit of time and was noticeable as a dip in the framerate. Turning off this auto-exposure mipmapping solved it (for me).
  • oralpain - Saturday, June 27, 2009 - link

    Even though I've been well aware of how triple buffering works, and how to enable it, I rarely use it.

    Even on my 60Hz LCDs, I usually have a better subjective experience with vsync off. Not exactly sure why this is, but higher FPS, even if I'm not seeing the visual effects of it, is worth it over the elimination of the occasional tearing I notice.

    In the handful of games where I do prefer vsync, I've always tried to use triple buffering.
  • billythefisherman - Saturday, June 27, 2009 - link

    OK, triple buffering can undeniably offer benefits in certain situations, but saying that turning on triple buffering always gives you a better experience is nonsense.

    Take, for example, the case where you are running under 60Hz. In this case, over an average number of frames you'll experience exactly the same amount of lag as double buffering with vsync, but now you have lost some video memory that could be used to hold that top-level mipmap you're currently staring at, and so you see a lower quality picture at that point.

    Another problem is that with triple buffering your lag is unequally distributed, because each frame takes a variable amount of time to create/render, which could give a weird feel to your play compared to what you may otherwise be accustomed to - and it'll get worse with lower frame rates.

    Another problem is that developers may take advantage of this lag on the CPU side of things if they've coded for vsynced double buffering (which they invariably will do in most modern games) and know that on a faster machine they may have more CPU resources, so they may speed up the AI update or process more physics calculations with this left-over CPU time.

    OK, they may not, but a game engine is not a straightforward, simple system that runs everything on the GPU: it consists of many parts all working together to try to produce the lowest lag possible from input to output, and a vsynced double buffered scenario provides the easiest environment to tune that system.

    It's nowhere near as clear-cut as this article makes out.
  • DerekWilson - Saturday, June 27, 2009 - link

    quote:

    "Take for example the case where you are running under 60hz in this case over an average amount of frames you'll experience exactly the same amount of lag as double buffering with vsync"


    This is definitely NOT true at all. You will, in fact, experience the same amount of lag as double buffering WITHOUT vsync. If your real performance is consistently 45 FPS (each frame takes 22.2ms), then triple buffering and double buffering without vsync both deliver 45 FPS with the same latency for the start of the displayed image. Average latency will be 1.5 frames. BUT double buffering WITH vsync will only give 30 FPS, and in this case average latency is 2 frames.
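
    For the 45 FPS example, the framerate side of this is easy to sanity check with a quick sketch (it just assumes a constant 22.2ms frame time and a 60Hz display; nothing here comes from actual driver behavior):

        #include <cmath>
        #include <cstdio>

        int main() {
            const double refresh   = 1000.0 / 60.0;  // 16.67 ms per refresh at 60Hz
            const double frameTime = 1000.0 / 45.0;  // 22.2 ms to render each frame

            // Triple buffering (and double buffering without vsync) keep rendering,
            // so the delivered rate stays at the render rate.
            std::printf("triple buffering / no vsync : %.0f fps\n", 1000.0 / frameTime);

            // Double buffering with vsync holds each finished frame until a refresh,
            // so every frame ends up occupying a whole number of refresh periods.
            double intervals = std::ceil(frameTime / refresh);   // 2 in this example
            std::printf("double buffering with vsync : %.0f fps (each frame held for %.0f refreshes)\n",
                        60.0 / intervals, intervals);
            return 0;
        }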

    for triple buffering, lag is distributed the same as double buffering without vsync for the top of the displayed frame (above any tearing).

    The CPU side of rendering for rendering's sake is no longer huge, especially with multicore CPUs. The way a developer handles work between frames won't be hampered on the CPU side by a high framerate unless they have done something wrong.

    I intentionally kept this article simple in order to get the concept across and start talking about the subject. I could have included examples of things like 50 FPS, 45 FPS, and 20 FPS with all three page flipping techniques, but I felt it would just get in the way of itself by making the article unnecessarily longer and more complicated -- and all the examples deliver the same information: that triple buffering is equivalent in lag to double buffering without vsync for the top of the frame and the only time you see significant newer info in a double buffered no vsync situation is after a visible tear.

    Developing /for/ the page flipping method is not the most desirable approach... Unless it's triple buffering :-)
  • billythefisherman - Thursday, July 9, 2009 - link

    Example: your monitor is running at 60Hz and your graphics card is running at 45fps. As they are not in sync because of triple buffering, for 2 out of 3 frames the monitor will be displaying the same frame; at best the user sees 40 new frames per second.

    OK, that's more frames, but if you're looking at what is arguably more important - the amount of lag between your input being sampled and the results being displayed - then you see that you're no better off.

    For example, let's assume your input on the game side is locked to the GPU, which is typically the case in a triple buffering or no-vsync setup.

    If the GPU is running at a constant 45fps, on the first frame you will see 0 lag between the last frame being displayed. The last sample of your analogue input will be, let's say for the sake of simplicity, ~16.667ms ago.

    On the second frame the monitor will display the same frame because the GPU has finished rendering, and so will be displaying input from ~33.334ms ago, i.e. the frame will now be ~16.667ms old.

    On the third frame the monitor will now display the first new frame rendered since the start, which will now be 8.3335ms old (at a constant 45fps), i.e. the input sampled is now ~25.00ms old.

    With double buffered vsync on, your input on frame one will be 16.667ms old, on the second frame it will be 33.334ms old, and then on the third frame it repeats, i.e. it will be 16.667ms old again, etc.

    Multiply this out over a second at 60fps, i.e. 20*16.667 + 20*33.334 + 20*25.002 = ~1500 and 30*16.667 + 30*33.334 = ~1500, and as you can see the lag between your input being sampled and it being displayed is on average the same.

    All the game systems such as physics etc. running on the CPU will have similar lag-time characteristics - you won't see that much difference from frame to frame, and now with triple buffering you're sampling at uneven periods of time, which could give undesirable effects.
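
    For what it's worth, the sums in that last paragraph do come out roughly equal; here is a trivial check that just re-runs the arithmetic as stated (whether those per-refresh input ages are the right model is exactly the point under dispute):

        #include <cstdio>

        int main() {
            // Re-running the sums from the comment above: per-refresh input ages over
            // one second of a 60Hz display, as assumed there.
            double tripleCase = 20 * 16.667 + 20 * 33.334 + 20 * 25.002;
            double vsyncCase  = 30 * 16.667 + 30 * 33.334;

            std::printf("45 fps case           : ~%.0f ms total, %.1f ms average per refresh\n",
                        tripleCase, tripleCase / 60.0);
            std::printf("double buffer + vsync : ~%.0f ms total, %.1f ms average per refresh\n",
                        vsyncCase, vsyncCase / 60.0);
            return 0;
        }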
  • billythefisherman - Thursday, July 9, 2009 - link

    Sorry correction:

    ...

    On the second frame the monitor will display the same frame because the GPU *hasn't* finished rendering

    ...
  • Nighteye2 - Saturday, June 27, 2009 - link

    It looks like Triple Buffering, while delivering good results, also involves a lot of excess rendering of frames that never get displayed.

    Unlike double buffering with vsync, where every rendered frame gets displayed.

    It should be possible to get triple buffer performance with double buffering and vsync - by predicting how long it takes to render a frame (based on render time of the previous frame and a small margin), the computer could delay drawing the next frame instead of starting to draw it immediately. If the rendering of the frame gets finished just in time instead of shortly after the last refresh, it would eliminate the display lag.
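
    A rough sketch of what that scheduling could look like (purely hypothetical logic with made-up render times; this does not use any real swap chain API): estimate the next render time from the last one plus a margin, and delay the start of rendering so the frame completes just before the vblank.

        #include <algorithm>
        #include <cstdio>

        int main() {
            const double refresh = 1000.0 / 60.0;   // ms between vblanks
            const double margin  = 3.0;             // safety margin, ms (arbitrary)
            const double fakeRenderTimes[] = {12.0, 14.0, 13.0, 15.0, 12.5};

            double estimate = 16.0;                 // initial guess of render time, ms
            for (int frame = 0; frame < 5; ++frame) {
                double vblank = (frame + 1) * refresh;
                // Start rendering as late as we dare so the frame finishes just in time.
                double start  = std::max(frame * refresh, vblank - (estimate + margin));
                double finish = start + fakeRenderTimes[frame];
                std::printf("frame %d: start %.1f ms, finish %.1f ms, vblank %.1f ms, %s\n",
                            frame, start, finish, vblank,
                            finish <= vblank ? "on time" : "missed");
                estimate = fakeRenderTimes[frame];  // next prediction = last actual time
            }
            return 0;
        }

    The obvious risk, and probably why Derek says prediction isn't viable, is that any frame whose render time spikes past the estimate misses its vblank entirely.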
  • DerekWilson - Saturday, June 27, 2009 - link

    When framerate is less than 60 FPS, triple buffering doesn't spin off into oblivion doing work no one sees -- it maintains the same performance as double buffering without vsync but avoids tearing. Predicting rendering time isn't a viable option at this point for games ...
  • Nighteye2 - Saturday, June 27, 2009 - link

    If it renders 300 frames and only 60 frames get shown, doesn't that mean 240 excess frames have been rendered?

    It would be better to conserve energy and have the GPU run less hot by rendering less frames, while still getting the exact same output on the screen...
  • Scalarscience - Saturday, June 27, 2009 - link

    I'm late into this so I don't know if Derek (or anyone else) will get around to responding, but there's 2 things I thought I might bring up. I'll post the second as a separate post in case actual discussion ensues...

    First, the comments have established the differences between 'render ahead' & Double/Triple buffering in DirectX fairly well. But for the people who are actually trying this, the situation is imo potentially confusing. For instance, does forcing triple buffering+vsync via Rivatuner's utility (for games with no native implementation) still keep the default render-ahead setting (ie, 3 frames?) If so then this indeed is the source of a huge latency penalty.

    Even with games that implement Triple Buffering themselves in DirectX, there seems to be some variance and it would be nice for devs to publish their implementation (Valve?) and how it interacts with the 'render ahead' control panel setting. I always find that for FPS (online or otherwise) setting the render ahead for DirectX to 2 instead of 3 helps the game's 'feel', though I do put it back to the default of 3 for single player games where the eye candy is making my machine struggle (and I'm willing to trade some performance for keeping the graphics cranked up.)

    Now some games will have their OWN 'render ahead' implementation, like UT3 and other Unreal 3 engine games. I've had to not only set 'render ahead' to 2 but also dig into UT3's ini and disable its native 'one frame render queue' setting (or whatever it is). The last major update did bring that into the GUI settings finally.

    So the question there is how does the DirectX render queue & vsync + double/triple buffering interact? I'm guessing there's at least a few variations in that answer and I would love a discussion or article that begins with the early 3d games (Quake engine, Unreal engine then Source etc) and moves forward in time covering the mainstays in modern FPS games.
  • DerekWilson - Saturday, June 27, 2009 - link

    Let me preface this with: I'm unsure what game developers actually do at this point. If there is enough interest for an article, I'll try and sit down with some game developers and ask them about this.

    But this is what they /should/ do when combining render ahead with triple buffering.

    Start by rendering into the queue. Every vertical refresh, you send the oldest fully completed frame to the front buffer. If you fill up the queue before the next vertical refresh, drop the oldest frame and start rendering another newer one. Continue this until the next vertical refresh comes along.

    The game always renders to whatever buffer is marked current, and front buffer is always swapped with the buffer marked oldest.

    You still end up with a high potential latency of (16.67ms * queue_length), but depending on how the game handles it, this could potentially only happen when frametime >= (16.67ms * queue_length) anyway. The minimum latency in this case is longer than without the render ahead queue as well ...

    but there could be some flexibility in maintaining a minimum number of frames in the queue or even keeping it full until frametime severely dips ... there might be some ways to use this to help SLI/CF play nicer with triple buffering as well. Not that multiGPU needs anything to add more potential lag or anything ...
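
    If it helps, here is a toy sketch of that policy (my own illustration with made-up numbers, not actual driver code): a bounded queue of completed frames, drained once per refresh, dropping the oldest frame whenever it overflows.

        #include <cstdio>
        #include <deque>

        // Toy model of the policy described above: a bounded queue of completed
        // frames, fed by the renderer, drained one frame per vertical refresh, and
        // dropping the oldest frame when it overflows. Frame ids stand in for buffers.
        int main() {
            const unsigned queueLength = 3;     // render-ahead depth
            std::deque<int> completed;          // oldest completed frame at the front
            int nextFrameId = 0;

            for (int refresh = 0; refresh < 4; ++refresh) {
                // Pretend the GPU finished two frames since the last refresh
                // (i.e. rendering is running faster than the display).
                for (int i = 0; i < 2; ++i) {
                    if (completed.size() == queueLength) {
                        std::printf("  queue full, dropping oldest frame %d\n", completed.front());
                        completed.pop_front();  // keep the fresher frames instead
                    }
                    completed.push_back(nextFrameId++);
                }
                // At the vertical refresh, flip to the oldest fully completed frame.
                std::printf("vblank %d: displaying frame %d\n", refresh, completed.front());
                completed.pop_front();
            }
            return 0;
        }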
  • Scalarscience - Saturday, June 27, 2009 - link

    ...I'm posting this as a reply but please use the reply button on the previous post unless you're really wanting to reply to this particular post...

    The next thing that I see COMPLETELY neglected in this article and most of the comments--outside of 1 or 2 mentions of higher framerates increasing your movement rate--is that many games will use the engine's currently processing frame for collision & 'hitscan' decisions. My experience/understanding is that this is in addition to the input latency issues covered above, and in fact is MORE important to me than just worrying about visual tearing.

    Hitscan is a term I learned more from the original UT game; I know there was a term CS players used back in the 1.x days too but it slips my mind (and probably more terms now). Basically it applies to projectiles that have 'infinite' velocity, i.e., they hit their targets immediately when calculated. Hitscan isn't the only type of calculation used for in-game weapons by any means, but it's the one that (imo) is most affected by latency & discontinuities in the gameplay (caused by bad prediction or perhaps a stalling render thread), so it's the one that is most discussed. My impression is that the hitscan issues apply to other collisions & weapon hits as well, modified by how the weapons work.

    As FPS players are not only concerned with 'what they see' but also what their 'shots' and 'movements' are translating into (especially in competitive online FPS games), this is one of the main reasons that players used to force vsync off even if they could get their CRT to do 100Hz or higher. Older games do have things directly tied to the render thread (physics, movement speed etc.) but as time has gone on that's been reduced and methods of 'prediction' have been added, which means the discussions get complicated over time. Also, some games will use client-side detection for certain things (instead of correlating things from the server's perspective to make the final 'decision'), which can reduce apparent latency for hitscan-style shots.
  • Konstantineb - Saturday, June 27, 2009 - link

    Would like to know if AnandTech is considering an Arma 2 review with some GPU performance data. The game is really good looking, and it deserves a review from AnandTech....
  • StarRide - Friday, June 26, 2009 - link

    Since WoW's triple buffering actually says it might cause input lag, I'm beginning to wonder if WoW's triple buffering implementation is actually just 3 frame render ahead...
  • profoundWHALE - Monday, January 19, 2015 - link

    "I'm beginning to wonder if WoW's triple buffering implementation is actually just 3 frame render ahead... "

    Congratulations, you just accurately described a triple buffer as a "3 frame render ahead", which it is. It's really not that difficult a concept.

    We apply the same concept to a youtube video. We buffer ahead so we get a smooth, steady frame-rate, although in the case of video streaming, we are buffering many, many frames.

    This means that if you have terribly low fps, you'll see a bigger hit to latency with triple-buffering than someone with 100 fps.

    30 fps = 33.3 ms per frame, meaning that you can end up with latency of about 100 ms with triple buffering
    120 fps = 8.33 ms per frame, meaning that you can end up with latency of about 25 ms with triple buffering

    Vsync will continue rendering as usual, but will simply make sure that only an exact multiple is displayed, so if your average fps is about 45 fps and you have a 60 Hz refresh rate, you'll get a steady output of 30 fps. The downside is you'll probably get an effect similar to the 3:2 pulldown, where some frames will last for as short a time as 25 ms and as long as 33 ms.
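
    The arithmetic behind those latency figures, assuming (as this comment does) that three whole frames can sit between input and display:

        #include <cstdio>

        int main() {
            // Worst-case figures assuming, as the comment above does, that three whole
            // frames can sit between input and display.
            const double fpsValues[] = {30.0, 120.0};
            for (double fps : fpsValues) {
                double frameMs = 1000.0 / fps;
                std::printf("%.0f fps: %.2f ms per frame, ~%.0f ms across three frames\n",
                            fps, frameMs, 3.0 * frameMs);
            }
            return 0;
        }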

    This is why things like G-Sync and FreeSync are awesome, because we can just throw vsync and buffering out the window, and instead of trying to sync the input to the output, we sync the output to the input.
  • profoundWHALE - Monday, January 19, 2015 - link

    I should clarify that the real world lag from triple buffer isn't as bad as the numbers I put out there, but because you have vsync, you will have greater input lag so it still gives a good idea of what latencies you could be looking at.
  • MamiyaOtaru - Friday, June 26, 2009 - link

    Triple buffering is obv. better than double, but I'd still not use it.

    In the final set of comparison shots, two thirds of the screen on the double buff without vsync example is newer than the corresponding screen in the triple buffering example. This means tears, yes, but that's newer info you aren't getting until later with triple buffering.

    With double, a mouse movement that starts halfway between screen refreshes can (if the framerate is high enough) show up as soon as a new frame is rendered and swapped to the front to be read by the monitor. Granted, it will only show up on some bottom portion of the screen, with older info above, but in triple buffering that movement will not show up at all until the monitor finishes the current refresh and starts another.

    In the horse example posted, the max time between input and reaction on screen is 3 times greater with triple buffering than with double w/o vsync, even if with the latter that reaction is only shown on the bottom 2/3rds or 1/3rd of the screen. It's *there* and it isn't until later with triple buffering.

    But it's sure as hell better than double buff with vsync :) And visually better than double buff w/o vsync. A nice compromise. But let's not pretend there are precisely 0 drawbacks. If you'd prefer faster reaction time and can accept tearing to get it, you'll not want triple buffering, as that shuts out any new info until the monitor has drawn a completed, now out of date frame.

    It's similar to interlaced vs progressive. Progressive looks nicer, no artifacts. But given the same bits per second, interlaced will have a higher framerate as it isn't drawing the whole frame each time. With double buffering you get bits of several frames rendered onto the screen at once. Artifacts, but you're getting more up to date info *even if that more up to date info isn't on the whole screen*.
  • justniz - Friday, June 26, 2009 - link

    This article makes it sound like triple-buffering is always best, which is just not true.
    If you play games that require fast reactions and you have powerful enough hardware that you don't get tearing with double buffering, then DON'T USE TRIPLE BUFFERING.
    Why?
    Because triple buffering adds an extra frame's worth of delay before you see the picture. It's only maybe 1/30th to 1/60th of a second extra, but in twitch/frag games removing that delay may be enough to give you the edge.
  • DerekWilson - Friday, June 26, 2009 - link

    triple buffering does not add an extra frame of delay.

    triple buffering is always the best option, even for twitch shooters where a bad tear can look like something that's actually there and distract the gamer.

    having powerful hardware does not reduce tearing. in fact, at higher framerates there is always a higher chance of tearing than at lower frame rates (at 300 FPS tears will happen every frame, whereas at 40 FPS, tears cannot possibly happen every frame -- the lower the frame rate, the less likely or often tearing occurs).

    you still get the reduced lag "edge" that is apparent in double buffering without vsync, but you get the most recently fully completed frame of animation rather than a composite of multiple frames starting with the one that triple buffering would have given you in full.
  • MamiyaOtaru - Friday, June 26, 2009 - link

    "...you get the most recently fully completed frame of animation rather than a composite of multiple frames starting with the one that triple buffering would have given you in full."

    starting with, but ending with newer stuff, which is exactly the point. Triple buffering is offering the most up to date feedback on your input only up until the point a tear would happen in double buffering.
  • DerekWilson - Friday, June 26, 2009 - link

    i did note that this is the only potential advantage of double buffering without vsync, but that in order to actually benefit from it, it needs to be a visible artifact (the frames need to be different enough to cause a visible tear) and you only get the benefit in the percentage of the frame that comes after the tear.

    part of the double buffered frame is the same as the triple buffered frame ... if you need information from this part of the screen (if you're looking there) you still won't get the benefit and the peripheral distraction of tearing can ... well ... distract.

    some people may see this as a real advantage, but i certainly think the downside outweighs any potential upside.
  • vegemeister - Tuesday, August 6, 2013 - link

    >in fact, at higher framerates there is always a higher chance of tearing than at lower frame rates (at 300 FPS tears will happen every frame, whereas at 40 FPS, tears cannot possibly happen every frame -- the lower the frame rate, the less likely or often tearing occurs).

    With vsync off, effectively every rendered frame tears. The only time you render a frame and don't get tearing is if you get lucky and accidentally swap during vblank.
  • Schmide - Friday, June 26, 2009 - link

    Actually, if you read PrinceGaz's and my discussion:

    When vsync is on, the rendering of the next frame actually starts immediately after the previous frame, and would add no delay as long as the rendering time was less than the current refresh interval.

    The only real cost is memory.
  • DerekWilson - Friday, June 26, 2009 - link

    When rendering time is more than refresh time, double buffering with vsync can incur up to almost two full frames of lag (~33ms) in addition to the frame time.

    with triple buffering, this will be reduced to at most one frame of lag (~16.7ms) but this is the worst case scenario for triple buffering. average case will absolutely be less than this. average case for double buffering without vsync will be equal to this for the first frame that started being drawn to the screen (before any tear that may or may not happen). average case for double buffering with vsync will always be higher than triple buffering.
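
    Putting rough numbers on those worst cases (the 20 ms frame time below is just an arbitrary example of rendering slower than a 60Hz refresh, not a measured value):

        #include <cstdio>

        int main() {
            const double refresh   = 1000.0 / 60.0;  // 16.67 ms per refresh
            const double frameTime = 20.0;           // arbitrary example: slower than refresh

            // Upper bounds as stated above: double buffering + vsync can add up to
            // almost two refresh intervals on top of the frame time; triple buffering
            // adds at most one.
            std::printf("frame time            : %.1f ms\n", frameTime);
            std::printf("double buffer + vsync : up to ~%.1f ms\n", frameTime + 2.0 * refresh);
            std::printf("triple buffer         : up to ~%.1f ms\n", frameTime + 1.0 * refresh);
            return 0;
        }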
  • SonicIce - Friday, June 26, 2009 - link

    if you think vsync with triple buffering has the same performance as double buffering then i feel sorry for you
  • PrinceGaz - Friday, June 26, 2009 - link

    Your explanation of double-buffering and enabling vertical sync is certainly correct, but your explanation of how triple-buffering works is not how I understand it works (I've read a few articles on such things over the years, yeah... sad).

    I believe triple-buffering is as follows:

    You have three buffers: A (front-buffer), B (back-buffer), C (third-buffer or second back-buffer). In my explanation I'm going to refer to any buffer swapping as copying from one buffer to another; how it is implemented by the hardware is irrelevant.

    Your explanation is correct up until the point that all three buffers have a frame written to them, so A is currently being displayed, B has the next frame to be displayed, and C has just been filled with another frame. At that point, you say C is moved to B, and a new frame starts being rendered into C; in other words the card is constantly rendering frames to the two back-buffers as fast as possible and updating B at every opportunity. So that a recently completed frame is available in B to be moved to A at the vertical-refresh.

    The way I understand triple-buffering works is that once B and C both have frames rendered to them waiting to be displayed, the graphics-card then pauses until the vertical-refresh, at which point B is copied to A to be displayed, C is moved to B, and the card is free to start work on rendering a new frame to fill the now empty C. No frames are thrown away, and the card is not constantly churning out frames which won't be displayed.

    The whole point behind triple-buffering was to prevent framerate slowdowns caused by stalling when using double-buffering with vsync at framerates BELOW the refresh rate, NOT to minimise lag caused by vsync with double-buffering at framerates ABOVE the refresh rate (slight lag when the card was churning out frames faster than the refresh-rate was not seen as a problem with vsync, but big framerate drops like from 55fps to 30fps when it couldn't keep up were a major problem worth fixing).

    It should be noted that there is no difference between the two methods (constantly updating the back-buffers like you say, or stalling once both are filled like I've read elsewhere) at framerates below the refresh-rate as the two back-buffers are never both filled; a frame will always be moved from B to A to be displayed, before the one being drawn to C is completed (which means it can be immediately be moved to B and work continued on another one to fill C).

    The difference is when the framerate is considerably higher than the refresh rate. In your scenario, when the refresh occurs, the last frame the card has just rendered is displayed. At 100fps, that would be a frame completed no more than 0.01 seconds ago (because that's how quickly the card is churning out frames and pushing them into buffer B), meaning there is negligible lag (between 0 and 0.010 seconds).

    In my scenario, the new frame is one which began rendering two refreshes previous (it was rendered into C very quickly two refreshes back, moved to B at the last refresh, and at this refresh is finally moved to A and displayed). The lag is therefore always exactly two frames provided the card is capable of rendering a frame faster than the refresh-rate. At 60hz refresh the lag will therefore be a constant 0.033 seconds regardless of the framerate the card is capable of (provided it can maintain at least 60fps).

    Whilst the longer lag (0.033 vs 0-0.010) would be a disadvantage in some cases (your best option there is to use double-buffering with no vsync), it is a consistent lag which in most games will feel better. It also means your graphics-card isn't constantly producing frames many of which will never be seen.

    The only problem is I don't know who is right. What I've said happens is what I've read on several other sites over quite a few years. Your article today Derek is the first time I've heard of a triple-buffering which involves the card continually updating the back-buffers.
  • Touche - Friday, June 26, 2009 - link

    I agree. Every site and topic I've read about triple buffering said that it works like you've explained. That's why most people hate it. It does resolve framerate drop issues of DB+vsync, but introduces too much lag. I would really like Anandtech to check this and get back to us.
  • DerekWilson - Saturday, June 27, 2009 - link

    The problem and discrepancy come from the fact that MS implements render ahead in DX, and because the default is 3 frames people took this to be "triple buffering", but you could do 2 frame render ahead and no one is going to call it "double buffering" ...

    It's really a render queue rather than a page flipping method.

    This article describes what, when people are talking about page flipping, "triple buffering" should refer to. This is also the way OpenGL works when triple buffering is enabled.
  • Touche - Sunday, June 28, 2009 - link

    Have you seen this?

    http://msdn.microsoft.com/en-us/library/ms796537.a...
    http://msdn.microsoft.com/en-us/library/ms893104.a...
  • DerekWilson - Wednesday, July 1, 2009 - link

    What they are showing is 1 frame render ahead with vsync. In MS DX terms, this is a flip chain with 2 back buffers and a present interval of one.

    This is them calling it triple if it uses three total buffers. This is still a flip queue and should be referred to as such to avoid confusion.
  • DerekWilson - Saturday, June 27, 2009 - link

    Actually, I need to clarify and say that this is my understanding of the way triple buffering with OpenGL works under windows at this time.
  • Schmide - Friday, June 26, 2009 - link

    Nice analysis.

    "In my explanation I'm going to refer to any buffer swapping as copying from one buffer to another; how it is implemented by the hardware is irrelevant."

    I think you have to elaborate on what's a copy and what's a swap as they are very different operations. Copying locks both surfaces preventing the use of both and takes processing power, while swapping locks nothing and takes no processing power.

    In your explanation, you would have very different results if it was a copy or a swap.
  • PrinceGaz - Friday, June 26, 2009 - link

    I understand if the card is actually copying blocks of memory between fixed buffers that it would have a performance impact. Whether it is copying or swapping is irrelevant as to what I was trying to explain, which is why I said how it is implemented by the hardware isn't relevant. It's best just to assume that in reality it is swapping buffers, which is equivalent to an instantaneous copy/move.

    The only thing is I've now realised there is a third way triple-buffering could work, which is roughly halfway between the two methods I proposed; if both B and C have been filled at the vertical-refresh meaning the card has been stalled, B is discarded, C is moved to A to be displayed, and the card now starts rendering to B, then to C. Again that makes no difference when the framerate is below the refresh-rate but it allows the card to render at up to double the refresh-rate to reduce the lag to a minimum of one instead of two refreshes when conditions allow.
  • Schmide - Friday, June 26, 2009 - link

    Ok I get it.

    3 buffers A, B and C all fully renderable. The rendering goes as so:

    A is rendered and queued to become the primary on the refresh after its completion.

    B begins rendering right after A is completed then queued to become primary the refresh after A or the next refresh after B completes.

    C begins rendering after B finishes rendering and queues as B was queued to A.

    repeat.

    The advantage being, when vsync is on, instead of starting the rendering of the next surface after the swap, you start rendering immediately on the 3rd surface rather than waiting for the current primary to become available after the swap.

    Makes sense.
  • DerekWilson - Saturday, June 27, 2009 - link

    This is a good explanation of render ahead ... it's different than using triple buffering for page flipping.
  • PrinceGaz - Friday, June 26, 2009 - link

    ooops, made a slight error in my lag calculations when framerate can exceed refresh-rate.

    Two frames ago (my method) is correct- 0.033 seconds always because that is how long two refreshes take at 60hz.

    Your method (constant back-buffer updating) will be between one and two frames ago, not zero to one frames like I said, so at 100fps the lag would actually be between 0.010 and 0.020 seconds, not 0 and 0.010 seconds like I said earlier.

    Sorry about that.
  • DerekWilson - Friday, June 26, 2009 - link

    Actually, this is precisely why I wanted to write this article.

    The technique you describe here:

    "The way I understand triple-buffering works is that once B and C both have frames rendered to them waiting to be displayed, the graphics-card then pauses until the vertical-refresh, at which point B is copied to A to be displayed, C is moved to B, and the card is free to start work on rendering a new frame to fill the now empty C. No frames are thrown away, and the card is not constantly churning out frames which won't be displayed."

    While it uses 3 buffers, it is actually called render ahead. I'm talking about a page flipping method (which can actually be combined with render-ahead, but that's beyond the scope of this piece).

    The benefit of the DirectX render ahead approach (which can be up to 8 frames iirc), is that it incurs a higher potential latency in order to achieve much smoother action.

    If my framerate fluctuates a lot between high and low rates, it's possible that the game will feel "jerky." But using render ahead, I can essentially cache up to X (the default is 3) quickly rendered frames so that the next long frametime I hit doesn't cause the monitor to keep showing the exact same image for multiple frames while it waits -- instead, if I have my 3 frames rendered ahead and the frametime of my next frame is anything up to 50ms, my rendered-ahead frames can be spat out, one after the other, sequentially until my 4th frame is ready. If frame rate goes back up after this, then I've successfully smoothed things out.
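
    Here is a toy timeline of that smoothing effect (made-up numbers, a blocking 3-deep queue, and one deliberately slow frame; this is my own sketch, not how any particular driver implements it): the spike gets hidden because several finished frames were already waiting.

        #include <algorithm>
        #include <cstdio>
        #include <deque>

        // Toy timeline of a 3-deep render-ahead queue absorbing one slow (~44 ms)
        // frame: the renderer blocks when the queue is full, and the display takes
        // one queued frame per 16.67 ms refresh. All numbers are made up.
        int main() {
            const double refresh = 1000.0 / 60.0;
            const double renderTimes[] = {5, 5, 5, 44, 5, 5, 5, 5, 5, 5};
            const int numFrames = 10;
            const unsigned depth = 3;

            std::deque<int> ready;      // ids of completed, not-yet-displayed frames
            double renderStart = 0.0;   // when the renderer can begin its next frame
            int produced = 0, shown = -1;

            for (int r = 1; r <= 10; ++r) {
                double vblank = r * refresh;

                // The renderer keeps working up to this vblank while the queue has room.
                while (produced < numFrames && ready.size() < depth &&
                       renderStart + renderTimes[produced] <= vblank) {
                    renderStart += renderTimes[produced];   // frame finishes here
                    ready.push_back(produced++);
                }
                bool stalled = (produced < numFrames && ready.size() == depth);

                // At the vblank, flip to the oldest queued frame (or repeat the last one).
                if (!ready.empty()) {
                    shown = ready.front();
                    ready.pop_front();
                    std::printf("vblank %5.1f ms: new frame %d\n", vblank, shown);
                } else {
                    std::printf("vblank %5.1f ms: repeating frame %d\n", vblank, shown);
                }

                // A renderer that was blocked on a full queue can resume at the flip.
                if (stalled)
                    renderStart = std::max(renderStart, vblank);
            }
            return 0;
        }

    In this run every vblank gets a new frame despite the slow one, which is the smoothing benefit; the cost is that frames queued early are already old by the time they reach the screen.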

    the price is, of course, that high framerate means we will see lag because no frames are dropped.

    but this is not triple buffering.

    DirectX does not support actual triple buffering out of the box -- it has to be programmed by the developer. OpenGL does actually support triple buffering inherently as I described.

    I promise that the description I gave in the article is what triple buffering is supposed to be. calling render ahead triple buffering because the default uses 3 buffers has definitely caused a lot of confusion (especially when the wikipedia page cites this as a reason that render ahead is "an implementation" of triple buffering ... which is disingenuous in my mind).

    we don't, after all, call anything that uses 2 buffers double buffering -- like if I use 2 MRTs while I'm rendering something, am I double buffering then? in the same sense that 3 frame render ahead is triple buffering then sure ... but not if we are talking about page flipping.

    ... so ...

    also, the lag between 0 and 1 frames that i was talking about is lag between the END of rendering the frame and when it is displayed. If frametime is taken into account, the input that generated that frame will have happened at a maximum of (frametime + 16.67ms) ago ... so it could be longer ago than just one frame, but not /because/ of triple buffering.



  • GreyMulkin - Friday, June 26, 2009 - link

    VSYNC for life!

    This article has convinced me that I only want to use double buffering + vsync. I despise tearing - your example of "only seeing it on fast turns" is disingenuous. When vsync is off, even games with simple graphics tear, everywhere. I find it distracting and undesirable. But I also don't want the input lag associated with being 1 more frame away from the action.

    "While enabling vsync does fix tearing, it also sets the internal framerate of the game to, at most, the refresh rate of the monitor (typically 60Hz for most LCD panels)."

    I see no problem with this. I'd much rather have the game running near a constant framerate than to have it bounce back and forth between high and low performance. Consistency is what I want.

    "This can hurt performance even if the game doesn't run at 60 frames per second as there will still be artificial delays added to effect synchronization. Performance can be cut nearly in half cases where every frame takes just a little longer than 16.67 ms (1/60th of a second). In such a case, frame rate would drop to 30 FPS despite the fact that the game should run at just under 60 FPS."

    Well, if your frames are taking longer than 16.67ms to render, then you're not actually rendering 60 fps. Duh! Longer rendering times mean lower framerates. If you don't like the framerate you're seeing, turn down some settings (resolution, quality, AA, AF, etc). Triple buffering won't fix the problem of render times which are too long for the desired framerate.
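
    The quoted "just a little longer than 16.67 ms" cliff is easy to see numerically (a quick sketch assuming a 60Hz display, a constant frame time, and the wait-for-next-refresh behavior the article describes):

        #include <cmath>
        #include <cstdio>

        int main() {
            const double refresh = 1000.0 / 60.0;   // 16.67 ms per refresh at 60Hz
            const double frameTimes[] = {15.0, 16.0, 17.0, 25.0, 33.0, 34.0};

            for (double ft : frameTimes) {
                // With double buffering + vsync, a finished frame waits for the next
                // refresh, so each frame occupies a whole number of refresh intervals.
                double intervals = std::ceil(ft / refresh);
                std::printf("%.0f ms/frame: %.1f fps uncapped -> %.0f fps with double buffering + vsync\n",
                            ft, 1000.0 / ft, 60.0 / intervals);
            }
            return 0;
        }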
  • GreyMulkin - Friday, June 26, 2009 - link

    To elaborate on the "while enabling vsync..." thing: vsync off usually makes the game try to run as fast (high fps) as possible, which is generally recognized as a good thing. But because scenes differ in complexity, the frames rendered per second will vary wildly, and that will affect input processing, which is usually tied directly to fps. So what I don't want is the *feel* of the game, the smoothness of mouse movements, etc., to be affected.
  • MadMan007 - Friday, June 26, 2009 - link

    Depends upon the game. For online competitive FPS I disable vsync, for single player games or less twitchy ones I enable it. There's no poll option for this.
  • TonyB - Friday, June 26, 2009 - link

    I still use my 21" CRT, i run at 100hz refresh rate with vsync on w/ triple buffering.

    at 100hz, your input lag is only 10ms (1/100) compared to 16.6ms (1/60) not to mention i'm capped at 100 fps instead of 60fps.

    this is why i'm still using CRT instead of LCD.
  • Schmide - Friday, June 26, 2009 - link

    My conceptions.

    Triple Buffering is 2 back buffers that alternate copying (BLT) to a front buffer (primary/screen) while the other is rendering.

    Double Buffering is two surfaces that trade places between front and back buffer by switching states. Only works in full screen mode.

    Back Buffering is where one surface is rendered to and then copied to the front buffer (primary/screen). Often falsely called Double Buffering.

    Triple Buffering is designed to avoid the surface lock during a copy to the front buffer (BLT) in windowed mode so the next rendering cycle can start early. In full screen mode it just adds an extra step (BLT) in the rendering cycle, since a hardware is swap moves no memory just pointers.

    I would imagine the only reason a Triple Buffer would reduce tearing is that, on average, the back buffer copy is playing catch-up to the primary surface update, and the chances of half-rendered frames are a bit less.

    So proper use would be

    Double Buffer - Full Screen Rendering.
    Back Buffer - Simple Full/Windowed Rendering
    Triple Buffer - Complex Windowed Rendering.
  • Schmide - Friday, June 26, 2009 - link

    PrinceGaz explained it below in a way I understand.
  • Schmide - Friday, June 26, 2009 - link

    -"is"

    I want to add: vsync can be a problem because of the synchronous nature between game code and rendered frames. The more frames you get, the better your character moves. If you lock down/cap your frames you may be losing some response.

    Example: in CoD4 Crash, the wall by the dumpster near the 3-story building - you can only jump over it if your frames get above 125. I assume there is some round-off error and Euler-like calculations going on.

    The ideal rendering cycle, other than a fixed or capped gameplay engine, would be: vsync, update, render a frame, do game code without rendering over and over, repeat.
  • DerekWilson - Friday, June 26, 2009 - link

    triple buffering does not use a blit to move a back buffer to a front buffer -- it is still done with buffer renaming.

    i.e. you'll have three pointers: one to the frame currently being rendered, one to the most recently completed frame (these are both back buffers), and one to the front buffer.

    after a vertical refresh completes, if there is not a more recently completed frame than the current front buffer, the current front buffer locks again and the same frame is drawn. If there is a more recently completed frame newer than the one that was just drawn, then this buffer becomes the front buffer and the old front buffer becomes the other back buffer.

    when the GPU finishes rendering into one back buffer, it marks that buffer as the most recently completed and swaps the pointers so that its current buffer is the previous most recently completed buffer that is not the front buffer.

    ...

    i know, clear as mud, right?

    but really, there is no blit involved in a sane triple buffering implementation.
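
    To make the pointer renaming concrete, here is a minimal C++ sketch of the scheme described above (illustrative only; the structure and the onRenderComplete/onVBlank hooks are invented for the example, not taken from any actual driver):

        #include <utility>

        struct Buffer { /* pixel storage lives in video memory */ };

        struct TripleBuffer {
            // assume the three pointers are set to three distinct buffers at init
            Buffer* front;      // buffer currently being scanned out
            Buffer* completed;  // most recently finished frame (back buffer 1)
            Buffer* rendering;  // frame the GPU is drawing right now (back buffer 2)
            bool    newFrameReady = false;

            // GPU finished a frame: the new frame becomes "completed" and the
            // stale completed frame is reused as the next render target.
            void onRenderComplete() {
                std::swap(completed, rendering);
                newFrameReady = true;
            }

            // Once per vertical refresh: if a newer frame exists, rename the
            // pointers; nothing is copied. Otherwise scan out the same frame again.
            void onVBlank() {
                if (newFrameReady) {
                    std::swap(front, completed);
                    newFrameReady = false;
                }
            }
        };

    Frames finished between refreshes simply overwrite each other via onRenderComplete, which is how stale frames get dropped without any blit.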
  • nvmarino - Friday, June 26, 2009 - link

    Hey Derek, thanks for the article. Any chance you could provide more detail about the issues with SLI and triple buffering? Such as why it's an issue, can the issues be overcome by game developers or is it an issue at the driver level, and also what are the typical problems an end-user would experience?
  • Compddd - Friday, June 26, 2009 - link

    Or can I turn Vsync off and just leave triple buffering on? Like in L4D or TF2 for instance?
  • DerekWilson - Friday, June 26, 2009 - link

    it is not possible to run triple buffering without vsync.

    the purpose of triple buffering is to provide a buffer that can remain locked during the vertical redraw (so that there is no corruption); this IS vsync.

    but the advantage is that there are still two buffers left over so that you can always save the most recently completed frame while working on the next one (and also not corrupting what is currently being displayed).

    think of it like this: there is one current work space, one most recently completed frame, and one vsync'd buffer.
  • Compddd - Friday, June 26, 2009 - link

    Why do these games like L4D and TF2 have the option to turn off Vsync or triple buffering then? Or turn them both on, or turn one on and leave the other one off?
  • JonP382 - Saturday, June 27, 2009 - link

    They don't. There's an option to turn on vsync with double buffering, or vsync with triple buffering. Or no vsync.
  • Atechie - Friday, June 26, 2009 - link

    Thanks for showing me why keeping my 2x 21" CRTs is a good choice, so I don't get worse IQ, fake blacks, tearing, a sucky 60Hz refresh and all the other crap that makes LCDs less than stellar for gaming.
  • The0ne - Friday, June 26, 2009 - link

    I truly believe this article and the arguments are really for the hardcore gamers. I game myself but rarely do I care for the few minor issues that occur every now and then. If you're not a hardcore gamer it's really not an issue whether you have any of these options on.
  • OblivionLord - Friday, June 26, 2009 - link

    What I would like to see in the tests is multiple passes and some custom in-game test runs using something like FRAPS to capture the framecount, not just purely synthetic benching using either the in-game timedemo or a custom timedemo. This way the benches reflect gameplay performance a bit more realistically.

    Also throw in the Min and Max framerates for those that want to know. Not just limit us to the AVG.

    This triple buffering issue is just small fries compared to how this site's overall testing methods stack up against other sites'.
  • ereetos - Friday, June 26, 2009 - link

    In some games, extremely high FPS will distort the physics (e.g. Quake, Call of Duty).

    with your video card rendering 125 fps, you can move faster than people running 60fps, shoot faster, and jump further. When you bump that up to 250 fps, you have an increased advantage which is why multiplayer gets capped to 250fps by punkbuster software.

    If you enable triple buffer, but no vsync, will this still be the case? or will the game engine interpret it as a lower frame rate?
  • DerekWilson - Friday, June 26, 2009 - link

    it is impossible to disable vsync and enable triple buffering.

    the point of triple buffering is to allow one buffer to remained locked all the way through a vertical refresh cycle so that there is no corruption while still allowing the game to have two buffers to bounce back and forth between.

    i was unaware of the punkbuster "feature" ... i imagine that since the game would report only 60 FPS with triple buffering, even if you were getting the lag advantage of something like 300 FPS, that it would not be limited in that case.

    but i don't know how punkbuster works, so i could very well be wrong.
  • Dynotaku - Friday, June 26, 2009 - link

    So here's a question. I have a 120Hz LCD. I run it with vsync disabled, and in CoD4, for instance, I get around 90FPS most of the time. No tearing that I can really detect.

    So with a 120hz monitor, is triple buffering still better or is it a case where it doesn't make that much difference as long as you're getting 60+ FPS?
  • JarredWalton - Friday, June 26, 2009 - link

    If you enabled VSYNC, you'd get 60FPS, while with triple buffering you should get 90FPS still (but with perhaps slightly more latency).
  • Dynotaku - Friday, June 26, 2009 - link

    I guess my question is, is it better to disable vsync or enable triple buffering? It probably doesn't matter much at 90 FPS. I'm running without vsync and I don't see any tearing, and the framerate is amazing and really fluid.
  • DerekWilson - Friday, June 26, 2009 - link

    if you have a real 120hz refresh, triple buffering would be even better as there would be no tearing and the maximum additional lag added by triple buffering would be cut in half.

    Running at 90 FPS on a 120 Hz monitor, triple buffering would still be the best option.
  • jp777cmoe - Saturday, July 18, 2009 - link

    With or without vsync on? I have a 120Hz monitor... not sure if I should go no vsync + triple buffering or vsync with triple buffering.
  • vegemeister - Tuesday, August 6, 2013 - link

    He would get "90 FPS", but since his monitor is not running at 90Hz, what he would actually see is a ridiculous amount of judder.
  • greylica - Friday, June 26, 2009 - link

    I always use triple buffering in OpenGL apps, and the performance is superb, until Vista/7 came along and crippled my hardware with vsync enabled by default. This sh*t-of-hell Microsoft invention crippled my flawless GTX 285 to a mere 1/3 of its OpenGL performance in the two betas I have tested.

    Thanks to GNU/Linux I have at least one chance to be free of the issue and use my 3D apps at full speed.
  • The0ne - Friday, June 26, 2009 - link

    Love your comment lol
  • JonP382 - Friday, June 26, 2009 - link

    I always avoided triple buffering because it introduced input lag for me. I guess the implementation that ATI and Nvidia have for OpenGL is not the same as this one. Too bad. :(

    I'm going to try triple buffering in L4D and TF2 later today, but I'm just curious if their implementation is the same as the one promoted in this article?
  • DerekWilson - Friday, June 26, 2009 - link

    I haven't spoken with Valve, but I suspect their implementation is good and should perform as expected.
  • JonP382 - Friday, June 26, 2009 - link

    Same old story - I get even more input lag on triple buffering than on double buffering. :(
  • JonP382 - Friday, June 26, 2009 - link

    I should say that triple buffering introduced additional lag. Vsync itself introduces an enormous amount of input lag and drives me insane. But I do hate tearing...
  • prophet001 - Friday, June 26, 2009 - link

    One of the best articles I've read on here in a long time. I knew what vsync did as far as degrading performance (only in that it waited for the frame to be complete before displaying), but I never knew how double and triple buffering actually worked. Triple buffering from here on out.

    4.9 out of 5.0 :-D
    (but only b/c nobody gets a 5.0 lol)
    thank you

    Preston
  • danielk - Friday, June 26, 2009 - link

    This was an excellent article!

    While I'm a gamer, I don't know much about the settings I "should" be running for optimal FPS vs. quality. I've run with vsync on as that's been the only remedy I've found for tearing, but had it set to "always on" in the gfx driver, as I didn't know better.

    Naturally, triple buffering will be on from here on.

    I would love to see more info about the different settings (anti-aliasing, etc.) and their impact on FPS and image quality in future articles.

    Actually, if anyone has a good guide to link, i would appreciate it!


    Regards,
    Daniel
  • DerekWilson - Friday, June 26, 2009 - link

    keep in mind that you can't force triple buffering on in DirectX games from the control panel (yet - hopefully). It works for OpenGL though.

    For DX games, there are utilities out there that can force the option on for most games, but I haven't done indepth testing with these utilities, so I'm not sure on the specifics of how they work/what they do and if it is a good implementation.

    The very best option (as with all other situations) is to find an in-game setting for triple buffering, which many developers do not include (but hopefully that trend is changing).
  • psychobriggsy - Friday, June 26, 2009 - link

    I can see the arguments for triple buffering when the rendered frame rate is above the display frame rate. Of course a lot of work is wasted with this method, especially with your 300fps example.

    However I've been drawing out sub-display-rate examples on paper here to match your examples, and it's really not better than VsyncDB apart from the odd frame here and there.

    What appears to be the best solution is for a game to time each frame's rendering (on an ongoing basis) and adjust when it starts rendering the frame so that it finishes rendering just before the Vsync. I will call this "Adaptive Vsync Double Buffering", which uses the previous frame rendering time to work out when to render the next frame so that what is displayed is up to date, but work is reduced.

    In the meantime, let's work on getting 120fps monitors, in terms of the input signal. That would be the best way to reduce input lag in my opinion.
  • DerekWilson - Friday, June 26, 2009 - link

    unfortunately, you really can't build a practical implementation that starts rendering a frame at the point where it will finish just before the next viable refresh. typically, with anything changing at all on screen, you aren't going to have previous frames be good predictors down to the accuracy level you would need.

    I didn't include sub 60 fps or sub 30 fps examples to keep it simple ... but in each case, the frame that starts being drawn at each refresh is equivalent between double buffering with no vsync and triple buffering.

    the "odd frame" here or there really add up when you look at an entire second by the way.
  • velanapontinha - Friday, June 26, 2009 - link

    I always try to play with double buffering + V-Sync. I've known about Triple Buffering for quite some time, but I still prefer DB+Vsync. It's just that I never felt the theoretical input lag, while I can feel the benefits of having my CPU and GPU rest, instead of always striving to get those useless 100fps.
    60fps (heck, even 30fps), if constant, provides a flawless gaming experience, and if you can have a wonderful gaming experience without your hardware being pointlessly pushed to its limits, why make it render frames you will never miss?
    Less workload, less heat, less noise, less energy, and still an impeccable gaming experience.
  • DerekWilson - Friday, June 26, 2009 - link

    there is still benefit at 30 FPS as well and not only when the framerate skyrockets.

    as frametime gets longer, input lag starts to become more and more of an issue. minimizing additional lag (as triple buffering can do) can help more at lower framerates when compared to double buffering and vsync.
  • KikassAssassin - Friday, June 26, 2009 - link

    I just ran a test in WoW (I picked it since it has a Triple Buffering option built in), running down a path and back again along the same route three times: once with double buffering and vsync disabled, once with double buffering and vsync enabled, and once with triple buffering. I had RivaTuner open in the background monitoring my CPU and GPU usage.

    In all three tests, the CPU and GPU usage graphs look exactly the same. There's almost no difference between them whatsoever.
  • velanapontinha - Friday, June 26, 2009 - link

    Well, if you can't see any difference, I guess (i'm just guessing) that you're running WoW close to your setup limits, then.

    I'm a beta tester for a software company, and I can assure you that vsync can and will keep your CPU and GPU usage much lower.

    Try running a 3D app that leaves your hardware at ease (and thus runs at over 100fps with double buffering and no v-sync).
    Then run the same software with v-sync enabled, and you'll see that your hardware has a lot less to struggle for.

    Try this one:
    http://www.theprodukkt.com/downloads/fr-041_debris...">http://www.theprodukkt.com/downloads/fr-041_debris...

    A very small app (177kb) that looks impressive. Run it at a low resolution (say 1024x768, for example), and then check it out. There is a v-sync option in the app.
  • velanapontinha - Friday, June 26, 2009 - link

    At least I'm sure you'll notice that CPU usage will be lower. As to GPU, it depends, as GPU load indicators usually are not reliable (always varying at either 0% or 99%)
  • randomname - Friday, June 26, 2009 - link

    I usually start by switching most of the options on in games. After I realize it isn't running fast enough, I start switching some of those options off. Therefore even triple buffering is a "nice to have" property, that I would select (or not) based on an experiment. Unfortunately, that little tryout probably isn't representative of the rest of the game. So often just when it gets really interesting (a lot of stuff and cool effects start happening), the performance plummets. Then you switch off everything that doesn't have an immediate visual impact (maybe triple buffering as well) and try again.

    Absolutely the best part of console gaming is that someone else has made the (artistic) choice of enabling something, and they are in effect saying that your experience is best with these options. The game has been reviewed with those options and the same hardware, and if it sucks, it's the developers fault. The argument doesn't go towards "you really need a fast machine to appreciate the graphics", which leaves questions about how fast is fast enough (to play through the heaviest scenes) and is there any sense in making a several hundred dollar investment to play a fifty buck game, and exactly what options and hardware did the reviewer use? All that tends to take a lot away from the enjoyment and immersion.

    One example is the motion blur in Crysis. It looks really nice and smooths out that FPS-style jerkiness of being able to move your head (optical axis) so fast. But it was also quite a heavy option, and although I really, really didn't want to switch it off, I had to.
  • SleepyGreg - Friday, June 26, 2009 - link

    Having a poll of which buffering method you use under the heading "Triple buffering: Why we love it" is rather flawed. People often answer what they think is the right answer, not what they actually do.
  • DerekWilson - Friday, June 26, 2009 - link

    You know, I agree with you ... I apologize for poisoning the sample. I don't think I'm that great at article titles anyway, but the poll was just something I thought would be a cool idea. I didn't think about how they would impact each other.

    I'll try to be more careful with stuff like this if I do it in the future.
  • Mills - Friday, June 26, 2009 - link

    Seems like nobody here really agrees when it is better.

    Some people say it's better only when your FPS is greater than refresh, some say it's better only when FPS is less than refresh.

    Article seems to make the claim it's always better.

    I remain confused.

  • DerekWilson - Friday, June 26, 2009 - link

    I do make the claim that it's always better, but just wanted to use one example for simplicity's sake (the 300 fps example).

    at lower refresh rates, the general case for performance is still the same as double buffering without vsync (which starts rendering the same frame that triple buffering would start rendering) ... and it still has the smoothness and lack of tearing of double buffering with vsync.
  • james jwb - Friday, June 26, 2009 - link

    What about when 120Hz LCDs come out and a game can provide 120fps as a minimum? Surely double buffering is the best case here, or will triple buffering perform exactly the same in this case?
  • JimmiG - Friday, June 26, 2009 - link

    The reason double buffering still prevails is probably because when current 3D standards were set during the late 90's, video memory was at a premium.

    For example, at 1024x768 (the standard resolution at the time), each buffer would take up 1.5MB at 16bpp and 3MB at 32bpp. Not a lot today, but back then 8-16MB cards were the norm. If a game was designed so that VRAM usage would peak at 16MB, adding a couple of MBs for another buffer would kill performance. So the general idea was that "Yeah, sure triple buffering is nice, but it uses too much memory", and that idea has kind of stuck.
  • fiveday - Friday, June 26, 2009 - link

    There are major advantages to using Triple Buffering, but a few points that explain why its not automatically adopted.

    One big one is lag. Now, if things are pretty well lag free under double-buffering, no sweat. However, there's no getting around the fact that by adding an extra frame, you're adding 1/3 extra processing time between the frame being drawn and appearing on your display. If the game's pretty lag-free already, you'll never know the difference. If the game is already prone to some sort of input lag, it's about to get worse. How much worse, depends on the game itself... and in some cases it can drastically soften up your controls. It can be tricky to predict how much impact it will have, if any... a point I'll return to in the conclusion.

    Another issue is memory usage. In a perfect world, every system will always have adequate texture memory to accommodate triple buffering. Is it a perfect world? Nope. And if your graphics card is getting thin on RAM, get ready for a performance hit. How much? Maybe none, maybe a lot. Which brings me to my last point.

    Whether or not you'll see these adverse effects from using Triple Buffering depends partly on the game itself, how it was written, and partly on your own system configuration. Now, the developers are responsible for their own software, but there's no telling what kind of system an end user is going to try to run the game on. These days, a 4670 graphics card and a Phenom X2, while seemingly meager, are enough to get most games out there plenty playable... but there's still folks out there trying to run a game like Bioshock on a Radeon 9700 Pro (what's SM2.0, they cry!?!). Lord forbid someone try to use their laptop to play a game.

    By the way... SLI and X-Fire setups tend to HATE triple buffering.

    So you see... the developers have a tough enough time as it is getting their games playable on an extremely unpredictable variety of systems. Triple Buffering, while it has its advantages, simply introduces further risk of poor performance on a lot of systems out there. Should it be automatically enabled? Nope.

    But should it be available as an option? These days, I see no reason why not. The original Unreal and UT engines offered it as an option, and that was ten years ago. Bring it back for those of us who want to take a crack at it.
  • DerekWilson - Friday, June 26, 2009 - link

    you are correct that SLI and CrossFire don't like playing well with triple buffering... but then there have been plenty of oddities no matter what page flipping method we want to use.

    but enabling triple buffering does NOT add an additional latency penalty over double buffering unless double buffering visibly tears and you are talking about the rest of the frame ... double buffering and triple buffering start rendering the same frame every time.

    there is no inserted frame into the pipeline, as it's not a pipeline -- what you are describing is more like DirectX's default 3 frame render ahead which has much higher potential to add latency than triple buffering (when we are talking about the page flipping method and not just "having three buffers").
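
    For readers trying to picture the difference, here is a rough C++ sketch of the two policies contrasted above (hypothetical names and types, only meant to show what the display picks at each refresh):

        #include <deque>

        struct Frame { /* a completed frame */ };

        // Render-ahead queue (FIFO): every completed frame is eventually
        // displayed, oldest first, so a backlog of queued frames adds lag.
        Frame* nextFrameRenderAhead(std::deque<Frame*>& queue) {
            if (queue.empty()) return nullptr;  // nothing new, repeat last frame
            Frame* f = queue.front();
            queue.pop_front();
            return f;
        }

        // Triple buffering (newest wins): only the most recently completed
        // frame is considered; older completed frames were already overwritten.
        Frame* nextFrameTripleBuffered(Frame*& mostRecentlyCompleted) {
            Frame* f = mostRecentlyCompleted;   // may be null if nothing finished
            mostRecentlyCompleted = nullptr;
            return f;
        }

    The first policy lets lag build up behind queued frames; the second always shows the newest completed frame at each refresh, so lag cannot accumulate.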
  • sbuckler - Friday, June 26, 2009 - link

    If tearing is not a problem then you are better off double buffering with vsync off. Turn on triple buffering and you introduce another 16.6ms of display lag which matters in a fast fps.
  • DerekWilson - Friday, June 26, 2009 - link

    you do NOT automatically incur a one frame lag -- you have at most an additional one frame lag.

    as i explained, especially in fast shooters, triple buffering and double buffering with no vsync begin rendering the exact same frame even if double buffering without vsync switches to a newer frame at some point.

    and when tearing doesn't "happen" (read: isn't noticeable) that means the updated frames were not different enough to really matter anyway (otherwise you would see the difference).

    the possible advantage of double buffering could be argued when a tear happens near the top of the screen, but whether this is a real advantage is debatable.
  • BJ Eagle - Friday, June 26, 2009 - link

    What about double buffered vsync with a framerate just below 30 FPS?

    I would go for double buffered vsync if there is not an equal penalty going below 30 FPS as there is when going below 60 FPS, simply because I don't want to waste power rendering frames I won't miss...
    The movie industry teaches us that 25-30 FPS is actually enough to fool our brains into perceiving motion. But if the lag skyrockets with vsync at below 30 FPS, I guess I would go with triple buffering...
  • nafhan - Friday, June 26, 2009 - link

    With a 60Hz refresh rate, I think dropping below 30 will cause the same issues as dropping below 60. Vsync is going to update frames at intervals that divide evenly into the refresh rate. So, if 30 is not an option, then it will drop to 20 FPS.

    I think most people can tell the difference between 30FPS in a game and 60FPS. However, more than that really doesn't provide much benefit.
  • toyota - Friday, June 26, 2009 - link

    a game is not a movie...
  • velanapontinha - Friday, June 26, 2009 - link

    The eyes and brain that watch a game or a movie are the same. If there was a "Pepsi challenge"-like contest between 30fps, 60fps and 200fps, the error rate would be astronomical, and a lot of overspending gamers would feel bad about spending so much money on hardware that is able to create frames they never see - nor even miss.
  • JS - Friday, June 26, 2009 - link

    The difference is not in the frame rate, but the fact that a film is not a sequence of perfectly sharp static images (like games normally are). Motion blur is automatically introduced by the shutter time on the film camera. That is why 24 fps works for film but not so well for games.

    Most people would definitely see the difference.
  • james jwb - Friday, June 26, 2009 - link

    Films also do not require the viewer to make decisions based on what they see. In a movie, fast-paced movement in a war scene doesn't require the viewer to see every detail with perfect accuracy and definition; what happens next isn't your choice, you are just watching. In a game, what you see decides what you'll do, and fast movement that is motion-blurred to death will never suffice in some game genres. You need a compromise somewhere, and with games a higher-than-film frame rate significantly helps overcome this.
  • BJ Eagle - Saturday, June 27, 2009 - link

    Ahh - good point about the blurring in films...

    But here's another one then:
    In the nVidia control panel (Vista x64, driver ver 186.18) it clearly states under triple buffering (though only OpenGL is affected, as discussed) "Turning on this setting improves performance when Vertical sync is also turned on"...
    This is not quite the impression I got from reading this article. Clearly there is still some confusion about when to enable which settings, and having an article like this contradicting nVidia's recommendation doesn't really help... me at least :)
  • profoundWHALE - Monday, January 19, 2015 - link

    I'm just going to leave this here:

    http://www.testufo.com/
  • james jwb - Friday, June 26, 2009 - link

    I can see myself using triple buffering in most situations, but games like CS:S, i don't think it would be wise. For a game like this consistently high frame and refresh rates would be the preferred option. Actually that would be the preferred option for all games, but in order to do this you'd have to delay playing new, graphic intensive games for two years to allow the hardware to catch up.
  • DerekWilson - Friday, June 26, 2009 - link

    i'd still want triple buffering for CS:S ...

    for me, tearing is distracting and i use the top of my display more than the bottom (even if new data were drawn lower on the screen it wouldn't be beneficial to me).
  • james jwb - Friday, June 26, 2009 - link

    Ah, see, here's a point to consider as to why I said what I said. I use a CRT at 100Hz, so the tearing issue becomes almost insignificant. Sure, if I was on an LCD I would agree with you; tearing in CS:S is a disaster in that scenario.
  • JarredWalton - Friday, June 26, 2009 - link

    What I really want is LCDs with a native 120Hz refresh rate and data rate. That last part is key; I want 1920x1200 at 120Hz, not 1920x1200 with 60 images and some funky software interpolating to 120Hz. It would require DisplayPort, dual-link DVI, or HDMI 1.4 (I think?), but with triple buffering that would be the best of all worlds.
  • james jwb - Friday, June 26, 2009 - link

    @ Jarred, i couldn't agree more, but you know that already :)

    If someone like HP can bring a 24" IPS 120hz to market with similar performance to their current model, I'd be in tech-drool heaven. Under this scenario, I'd play CS:S with double buffering, no v-sync, but games that were graphic intensive and could not sustain high frame rates, I'd definitely love the option of triple buffering.
  • profoundWHALE - Monday, January 19, 2015 - link

    You'll need backlight strobing to get CRT-like performance on LCDs. Take a look at http://www.blurbusters.com/
  • texkill - Friday, June 26, 2009 - link

    First, let me sum up the actual advantage of triple buffering: smoothing out variable draw times when game framerate < monitor refresh. That's it.

    This article severely overstates the case for triple buffering when it says "there is an option that combines the best of both worlds with no sacrifice in quality or actual performance." Okay, so you want "the best of both worlds", which would be no tearing and minimum input lag? And the example used to prove this is 300 fps on 60Hz. Well guess what, I can give you the best of both worlds with something called "waiting a while." See those horse figures at the beginning of each frame in the double-buffer figure? Move them from the beginning of the frame to near the end and voila, input lag is looking good again.

    But actually it gets even better when you add multithreading to a double-buffered solution. Now you not only don't have to draw frames that will *never be seen by any living creature on Earth* (not the default behavior in DirectX btw), you can actually make use of the CPU time that would otherwise be spent in the graphics api to do something useful like physics or AI. You also then don't need to have frames that are drawing when the v-sync happens and causing the input lag and smoothness to vary every single frame (again, not the default DX behavior).

    Triple buffering has its place when drawing times vary and smooth animation is desired. But it should definitely not be blindly demanded of all game developers when most of them already know the tradeoffs and have already made very good judgments on this decision.
  • DerekWilson - Friday, June 26, 2009 - link

    this is more of an additional advantage. without vsync, double buffering still starts drawing the same frame that triple buffering would start drawing but changes frames in between. throw in vsync and you still get a doubling of worst case added input lag (and an increase in average case input lag too).

    and it's not about drawing the frames that will never be seen -- it's about not seeing frames that are outdated when newer frames can be finished before the next refresh (reducing input lag).

    multithreading still helps triple buffering ... i don't see why that even enters into the situation.

    the game can't know for sure how long a frame will take to render when it starts rendering (otherwise it would know how long it could wait to start the process so that the frame is as new as possible before the next refresh). there is no way to avoid having frames that are being worked on during a vertical refresh.
  • JarredWalton - Friday, June 26, 2009 - link

    VSYNC is really the absolutely worst solution to this problem in my opinion. Let's say you have a game that runs at ~75FPS on average on your system, with VSYNC off. Great. Enable triple buffering and you still get 75FPS average, though some frames will never be seen. Use double buffering with VSYNC and you'll render 60FPS... ideally, at least.

    The problem with VSYNC is that you get lower minimum frame rates, and those become very noticeable. If you're running at 60FPS most of the time, then drop to 30FPS or 20FPS or 15FPS (notice how all of those are an even divisor of 60), those lows become even more distracting. Far more common, unfortunately, is that maintaining 60FPS with many games is very difficult, even with high-end hardware. Rather than getting a smooth 60FPS, what you usually end up with is 30FPS.

    Finally, in cases where the frame rate is much higher than the refresh rate, triple buffering does give you reduced image latency relative to double buffering with VSYNC - though as Derek points out it still has a worst case of 16.7ms (lower than double with VSYNC).
  • zulezule - Friday, June 26, 2009 - link

    Your comment made me realize that I'd prefer my GPU to render the 60 vsync-ed frames and stay cool, instead of rendering 300 fps (out of which 4/5 are useless), overheat, become noisy and maybe even crash. The only case when I'd want more frames rendered would be when they are used to insert something in the one visible frame, as for example if the 4 invisible frames are averaged with the visible one to create motion blur. However, I'm pretty sure beautiful motion blur can be obtained much more easily.
  • DerekWilson - Friday, June 26, 2009 - link

    The advantages still exist at a sub 60 FPS level. I just chose 300 FPS to illustrate the idea more easily.

    At less than 60 FPS, the triple buffered case still shows the same performance as double buffering -- they both start rendering the same frame after a refresh. double buffering with vsync still adds more input lag on average than the other cases.
  • Mills - Friday, June 26, 2009 - link

    You made a good case of something currently impossible (if I understand you correctly) being better than triple buffering but I don't see where you made the case that triple buffering isn't better than double buffering in the case of FPS being much greater than refresh rate.

    The point is, when we are given a choice between double and triple, is there a reason not to choose triple?
  • texkill - Friday, June 26, 2009 - link

    What's impossible about it?

    Yes, there are drawbacks to triple buffering. Implement it the way DirectX does by default and you get input lag. Implement it the way the article suggests and you get wasted CPU and jerky animation. And either way you are sacrificing video memory that could have been used for something else.
  • DerekWilson - Friday, June 26, 2009 - link

    1) DirectX does not implement triple buffering (render-ahead is not the same and should not be referred to as "triple buffering" when set to 3 frames). The way to think of the DX mess is that they set up a queue for the back buffer, but there is only one real back buffer and one front buffer (even with 3 frame render ahead, it is essentially double buffered if we're talking about page flipping).

    2) The triple buffering approach described in this article is the only thing that should actually be called "triple buffering" if we are contrasting it with "double buffering" and referring to page flipping. Additionally, it does not create jerky animation -- the animation will be much smoother than either double buffering with or without vsync (either because frames have less lag or because they don't tear).
  • toyota - Friday, June 26, 2009 - link

    Yeah, it makes me wonder why both card companies don't even allow it straight from the control panel for DX games if there are no drawbacks. It also seems like all game developers would incorporate it in their games if, again, there were no drawbacks.
  • DerekWilson - Friday, June 26, 2009 - link

    Really, the argument against including the option is more complex ...

    In the past, that extra memory required might not have been worth it -- using that memory on a 128mb card could actually degrade performance because of the additional resource usage. we've really recently gotten beyond this as a reasonable limitation.

    Also, double buffering is often seen as "good enough" and triple buffering doesn't add any additional eye candy. triple buffering is at best only as performant as double buffering. enabling vsync already eliminates tearing. neither of these options requires any extra work and triple buffering (at least under directx) does.

    Developers typically try to spend time on the things that they determine will be most desired by their customers or that will add the largest impact. Some developers have taken the time to start implementing triple buffering.

    but the "drawback" is development time... part of the goal here is to convince developers that it's worth the development time investment.
  • Frumious1 - Friday, June 26, 2009 - link

    ...this sounds fine for games running at over 60FPS - and in fact I don't think there's really that much difference between double and triple buffering in that case; 60FPS is smooth and would be great.

    The thing is, what happens if the frame rate is just under 60 FPS? It seems to me that you'll still get some benefit - i.e. you'd see the 58 FPS - but there's a delay of one frame between when the scene is rendered and when it appears on the display. You neglected to spell out that the maximum "input latency" is entirely dependent on frame rate... though it will never be more than 17ms I don't think.

    I'm not one to state that input lag is a huge issue, provided it stays below around 20ms. I've used some LCDs that have definite lag (Samsung 245T - around 40ms I've read), and it is absolutely distracting even in normal windows use. Add another 17ms for triple buffering... well, I suppose the difference between 40ms and 57ms isn't all that huge, but neither is desirable.
  • GourdFreeMan - Friday, June 26, 2009 - link

    Derek has already mentioned that the additional delay is at most one screen refresh, not exactly one refresh, but let me add two more points. First, the additional delay will be dependent on the refresh rate of your monitor. If you have a 60 Hz LCD then, yes, it will be ~16.6ms. If you have a 120 Hz LCD the additional delay would be at most ~8.3ms. Second, if you are running without vsync, the screen you see will be split into two regions -- one region is the newly rendered frame, the other will be the previous frame, which will be the same age as the entire frame you would be getting with triple buffering. Running without vsync only reduces your latency if what you are focusing on is in the former.

    Also, we should probably call this display lag, not input lag, as the rate at which input is polled isn't necessarily related to screen refresh (it is for some games like Oblivion and Hitman: Blood Money, however).
  • DerekWilson - Friday, June 26, 2009 - link

    you are right that maximum input latency is very dependent on framerate, but I believe I mentioned that maximum input latency with triple buffering is never more than 16.67ms, while with double buffering and vsync it could potentially climb to an additional 16.67ms due to the fact that the game has to wait to start rendering the next frame. If a frame completes just after a refresh, the game must artificially wait until after the next refresh to start drawing again giving something like an upper limit of input lag as (frametime + 33.3ms).

    With triple buffering, input lag is no longer than double buffering without vsync /for at least part of the frame/ ... This is never going to be longer than (frametime + 16.7ms) in either case.

    triple buffering done correctly does not add more input lag than double buffering in the general case (even when frametime > 17ms) unless/until you have a tear in the double buffered case. and there again, if the frames are similar enough that you don't see a tear, then there was little need for an update halfway through a frame anyway.

    i tried to keep the article as simple as i could, and getting into every situation of where frames finish rendering, how long frames take, and all that can get very messy ... but in the general case, triple buffering still has the advantages.
  • DerekWilson - Friday, June 26, 2009 - link

    sorry, i meant input lag /due/ to triple buffering is never more than 16.67ms ... but the average case is shorter than this.

    total input lag can be longer than this because frame data is based on input when the frame began rendering so when framerate is less than 60FPS, frametime is already more than 16.67ms ... at 30 FPS, frametime is 33.3ms.
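
    A quick bit of arithmetic for anyone who wants to see those bounds side by side (a sketch of the numbers quoted above only, assuming a 60Hz refresh and the stated worst cases):

        #include <cstdio>

        int main() {
            const double refresh_ms = 1000.0 / 60.0;   // ~16.67 ms per refresh
            const double fpsExamples[] = {60.0, 45.0, 30.0};

            for (double fps : fpsExamples) {
                double frametime = 1000.0 / fps;
                // triple buffering: at most one extra refresh of added lag
                double tripleWorst = frametime + refresh_ms;
                // double buffering + vsync: roughly frametime + 33.3 ms worst case
                double dblVsyncWorst = frametime + 2.0 * refresh_ms;
                std::printf("%4.0f fps: triple <= %.1f ms, double+vsync up to %.1f ms\n",
                            fps, tripleWorst, dblVsyncWorst);
            }
            return 0;
        }

    The gap between the two columns stays at roughly one refresh interval regardless of framerate, which is the point being made above.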
  • Edirol - Friday, June 26, 2009 - link

    The wiki article on the subject mentions that it depends on the implementation of triple buffering. Can frames be dropped or not? Also there may be limitations to using triple buffering in SLI setups.
  • DerekWilson - Friday, June 26, 2009 - link

    i'm not a fan of wikipedia's article on the subject ... they refer to the DX 3 frame render ahead as a form of "triple buffering" ... I disagree with the application of the term in this case.

    sure, it's got three things that are buffers, but the implication in the term triple buffering (just like in the term double buffering) when applied to displaying graphics on a monitor is more specific than that.

    just because something has two buffers to do something doesn't mean it uses "double buffering" in the sense that it is meant when talking about drawing to back buffers and swapping to front buffers for display.

    In fact, any game has a lot more than two or three buffers that it uses in its rendering process.

    The DX 3 frame render ahead can actually be combined with double and triple buffering techniques when things are actually being displayed.

    I get that the wikipedia article is trying to be more "generally" correct in that something that uses three buffers to do anything is "triple buffered" in a sense ... but I submit that the term has a more specific meaning in graphics that has specifically to do with page flipping and how it is handled.
  • StarRide - Friday, June 26, 2009 - link

    Very Informative. WoW is one of those games with inbuilt triple buffering, and the ingame tooltip to the triple buffering option says "may cause slight input lag", which is the reason why I haven't used triple buffering so far. But by this article, this is clearly false, so I will be turning triple buffering on from now on, thanks.
  • Bull Dog - Friday, June 26, 2009 - link

    Now, how do we enable it? And when we enable it, how do we make sure we are getting triple buffering and not double buffering?

    ATI has an option in the CCC to enable triple buffering for OpenGL. What about D3D?
  • gwolfman - Friday, June 26, 2009 - link

    What about nVidia? Do we have to go to the game profile to change this?
  • DerekWilson - Friday, June 26, 2009 - link

    In game options for DX games are what we need to rely on right now, as there is no control panel option in a driver for this.

    It is possible to force triple buffering in some DX games through other means, but what is needed is game and driver developer pressure to get this feature into every game.
  • ukbrainstew - Sunday, June 28, 2009 - link

    You'd be surprised how many games D3DOverrider works with; I find compatibility is easily over 90%.

    That developers don't include the option is really rather frustrating, though I just thank the PC community for coming up with a very good workaround as they invariably tend to do.

    Another setting that I'd like to become standard is the ability to choose a framerate cap. Plenty of engines allow it (though it's often locked away) and it can work wonders for increasing the smoothness and playability of games on older hardware.

    Even sub-$100 parts could maintain a damn near constant 30fps in most games at 720p resolution, but they may very well struggle trying to hit 60fps, often resulting in wild variances. Would it not benefit Nvidia's and AMD's marketing if they could produce a driver-level setting that caps games at half your refresh rate? A setting that would suddenly make their budget parts capable of maintaining a steady framerate in so many more games, thus making them much more attractive products.

    Anyway, thanks a lot for the article, triple buffering has been something of particular interest to me as I really can't bear a game with any appreciable amount of tearing and I'd really rather not suffer increased input lag and as much as a 50% reduction in my framerate when a simple setting can do away with it all in one fell swoop.

    Could I suggest a mention of D3DOverrider in the article? Surely giving readers advice on how to benefit from triple buffering in more games would be a worthy addition and something many may be craving now that they're armed with knowledge of its inherent benefits.
  • erple2 - Sunday, June 28, 2009 - link

    I think the reduction in frame rate is to an integer fraction of the maximum frame rate - you'll get 1 (the refresh rate of the monitor), 1/2, 1/3, 1/4, 1/5, 1/6 etc. and nothing in between with vsync on.

    I've noticed that in games that allow me to show the frame rate, I get exactly 60 FPS (I have an LCD monitor), 30 FPS, 20 FPS, 15 FPS, 12 FPS, or 10FPS (and so on) and nothing in between. But that's the way the vsync operates with double buffering.

    With triple buffering, I can get more or less any FPS rate lower than 60.
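
    To put rough numbers on that snapping behaviour (an illustrative sketch only, assuming a 60Hz display and that each frame is held to the next refresh boundary):

        #include <cmath>
        #include <cstdio>

        int main() {
            const double refreshHz = 60.0;
            const double renderTimesMs[] = {15.0, 17.0, 25.0, 34.0, 40.0};

            for (double t : renderTimesMs) {
                // number of whole refresh intervals each frame occupies
                double refreshes = std::ceil(t / (1000.0 / refreshHz));
                std::printf("render %.0f ms -> displayed at %.1f fps\n",
                            t, refreshHz / refreshes);
            }
            return 0;   // e.g. 17 ms of work snaps to 30 fps, 34 ms to 20 fps
        }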
  • fiveday - Friday, June 26, 2009 - link

    Yes. D3DOverrider is a utility that (as the name implies) overrides certain D3D calls and forces a few of its own settings. Specifically, it can force Triple Buffering and VSync (on or off) in any Direct 3D application. It comes with RivaTuner, but is a separate app - you won't find it in RT's settings, but in its install folder as a standalone program.

    So yeah - download the latest RivaTuner (which you don't even have to run, tho it's useful!) and use D3DOverrider to force triple buffering in Direct 3D.

    This saved my experience with Dead Space... and I've been singing its praises ever since.
  • toyota - Friday, June 26, 2009 - link

    I have to go ahead and laugh at the people that turn triple buffering on from the standard control panel and claim they see a difference. That setting has NO effect on DX games and is for OpenGL only. Of course, you have to use a third party app like RivaTuner or nHancer to actually force triple buffering on. It's nice that some games like L4D have it built right into the game options, though, so that it is very convenient to enable from within the game without any third party crap to fool with.
  • leexgx - Saturday, June 27, 2009 - link

    Why is there not a pole option for triple buffering with vsync on and off? Or is the pole option meant for vsync on with triple buffering? If so, it's not what most of us would have picked.

    I always run games, if I can, with triple buffering but no vsync, as the lag is too much.


    Vsync on has always caused input lag, be it with 3 buffers or 2.
  • Hrel - Monday, June 29, 2009 - link

    pole is supposed to be poll, the way you mean. Confusing the way it is.
  • The0ne - Friday, June 26, 2009 - link

    For now, I just check the games to make sure there's an option for it. If not, then I don't bother trying to find a way around it. Derek has it right: developers have to see the benefits and implement it if video card manufacturers or MS don't.
  • DerekWilson - Friday, June 26, 2009 - link

    This is the reason the last line of our article is focused on the developers.

    They definitely, like Valve, need to start including triple buffering in in-game options.

    And it would be nice if NVIDIA and AMD could build something into the driver to make it work for everything. They put a lot of time into making AA work in most games, why not do the same for triple buffering?
  • GourdFreeMan - Friday, June 26, 2009 - link

    I was under the impression that if you set VSync to "Force On" and Triple Buffering to "On" in the nVIDIA control panel under the "Global Settings" tab you effectively force triple buffering on for all applications, except those specifically excluded by their individual profiles. Is this not the case? This option has been available for years... admittedly I have never attempted to capture frames to verify that triple buffering is actually occurring.

    I don't see why this shouldn't work universally for applications -- as far as the application knows the only thing that has changed is the size of the pool of available graphics memory.
  • SirLamer - Friday, June 26, 2009 - link

    It's just because nVidia hasn't designed their control panel to be super invasive. The only way to make it work is to have a program sitting there that intercepts calls from DirectX and changes them.

    Rather than blaming AMD and nVidia, it's probably better to ask why DirectX doesn't include a mechanism to control this performance parameter like it does for most other driver-configurable settings.

    Download Riva Tuner, and from the zip file install D3D Overider. It will sit on your taskbar and do the job. I have used this program in the past, but I forgot to put it back since my last reformat and will now do it tonight. Thanks for the reminder article!
  • GourdFreeMan - Friday, June 26, 2009 - link

    Hmm... it seems you are correct.

    How bizarre! I can understand the usefulness in keeping previous frames for post-processing effects, but you would only be reading from the frames and writing to the new frame, never writing to old ones. Why doesn't this "just work" under the control panel for DirectX like it does for OpenGL?
  • smn198 - Saturday, June 27, 2009 - link

    I suppose we have monitor refresh rates as a legacy from CRT technology. Is there any reason (other than compatibility) why we can't have an LCD that refreshes ad hoc, when both it and the next frame are ready? No more just missing a refresh.

    Alternatively, could the LCD lie about its refresh rate and have some sort of internal buffer to achieve the same thing - reducing lag?
  • GourdFreeMan - Sunday, June 28, 2009 - link

    All LCDs (both passive and active matrix) still refresh the screen periodically to prevent individual pixel elements from fading, so there is still a notion of refresh rate for LCDs. You do raise a good question of whether it would be possible to refresh the screen whenever frames are completed (which would have to be in addition to this base refresh rate, or you would get a flickering in brightness).

    Having an input buffer to reduce perceived display lag would result in torn frames if you tried to swap in the new frame mid-refresh. You still have to wait until the refresh is completed.
  • erple2 - Tuesday, June 30, 2009 - link

    That doesn't make sense from the perspective of how an LCD works. The charge that twists the polarizing LCD element doesn't fade over time (well, not over the few milliseconds between updates - though the charge probably fades over the years as they wear out).

    The pixel elements don't generate any light themselves. How do they fade then?

    I think that you're confusing 2 things here. The refresh rate of the LCD is tied to the output signal - they're both set to run at 60 Hz, so the video card outputs a "new frame" (even if the frame hasn't changed, it's a new frame) every 1/60th of a second. The LCD then reads that signal every 1/60th of a second and displays it.

    Part of the reason they chose 60 Hz is due to the bandwidth limitation of the set standard. To update more frequently than that, you'll clearly need the capability of transmitting more data down the interface. Right now, the DVI interface can transmit up to 3.96 Gigabits of info per second. At 24 bits per pixel and a 1920x1200 resolution, that's 55,296,000 bits per image. Given the hard cap of 3.96 Gbps, that's 3.96/55,296,000 * 1 billion, which is about 71 Hz (the arithmetic is spelled out in the short sketch at the end of this comment). That's the fastest a single-link DVI interface can refresh at that resolution. I believe it was therefore decided to cap the refresh rate at 60 Hz for any WUXGA resolutions. But that's out of convenience, not for any reason related to fading pixels (unlike a CRT).

    LCD's don't flicker per se, as there's no light that's turning on and off. The backlight is more or less constantly on.
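
    Spelling out that single-link DVI arithmetic (figures as quoted above; blanking intervals and link overhead are ignored for simplicity):

        #include <cstdio>

        int main() {
            const double linkBitsPerSec = 3.96e9;               // quoted single-link DVI rate
            const double bitsPerFrame = 1920.0 * 1200.0 * 24.0; // 55,296,000 bits at 24 bpp
            std::printf("max refresh ~ %.1f Hz\n", linkBitsPerSec / bitsPerFrame); // ~71.6 Hz
            return 0;
        }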
  • overzealot - Wednesday, July 1, 2009 - link

    As the pixels untwist (no power applied, or power applied in reverse) they transition back to blocking light. You could, theoretically, call the process fading, as that's what it would appear to do.

    The backlights in most LCD's run off AC, you can't say they're always on. Best you can say is that because the frequency is much higher than 50/60hz you can't see the flickering. It's still there.

    There are faster than 60hz panels, it's just that the electronics are more costly - and the majority of people don't care.
    I do care, but not enough to pay the extra cost of a 120hz panel.
    I'd rather have a larger panel.
  • DerekWilson - Wednesday, July 1, 2009 - link

    it is my understanding that pixel state on an LCD panel is driven by a steady voltage applied across the liquid crystal cell (aside from possible overdrive on the upswing to speed up transitions). because they are digital, until the controller changes the state of that pixel, it can remain at a constant percentage of twist because there is a constant voltage applied. no refresh is "required" and the bandwidth issue is what drives "refresh rate" on LCD panels.

    many LCDs do use CCFL for backlight which can have a slight flicker for a very very short time period every time current alternates polarity, but it isn't really ever "off" as they are driven both ways (there are no dedicated anodes and cathodes - they switch with current).

    But as we move toward LED backlighting (or away from CCFL and toward other technologies which are DC) then we won't have any flicker at all there either.
  • GourdFreeMan - Monday, August 31, 2009 - link

    This "steady" voltage (only true of active-matrix LCDs) isn't maintained directly by the LCD's power supply. For TFTs there are one or more capacitors gated by a transistor for controlling their voltage per pixel element (R,G,B) that maintains the state. These capacitors slowly discharge and must be refreshed periodically. In this sense all LCDs have a "base refresh rate".

    If you do a Google search for LCD controllers integrated into consumer products you will find there is an issue with perceived flickering in brightness as the pixel elements fade if you do not refresh them often enough.

    I was asked if it was possible to refresh the screen only when frames are completed, and this was the first issue I discovered when researching the question. Other than increased power usage and added controller complexity I do not know if there would be other issues if you tried to do a second "just in time" refresh and left my reply to the original question at that.
  • RSmith - Thursday, April 8, 2010 - link

    Hey GourdFreeMan,

    I got here thinking exactly the same thing as you: why do we need a fixed refresh rate on LCD's?

    Did you get any answers to that?

    I hope that future display technologies will allow this to happen, it would certainly be of huge benefit to gaming if frames were drawn as they were rendered.
  • homerdog - Sunday, June 28, 2009 - link

    I can set my LCD to 75Hz, which AFAIK is a lie.