The HD HQV Tests

The version of HD HQV that Silicon Optix provided for us contains tests for several different aspects of HD video decoding: noise reduction, resolution loss, and deinterlacing artifacts (jaggies). We will break down the specifics of each test and explain what we are looking for. This time around, Silicon Optix's scoring system allows more variability within each test, but we will try to be as objective as possible in our analysis.

Noise Reduction

The first test in the suite is the noise reduction test which is broken down into two parts. Initially, we have an image of a flower that shows large blocks of nearly solid color without much motion. This tests the ability of the video processor to remove spatial noise.



When no noise reduction is applied, we see static or sparkling in the flower and background. Hardware scores higher the more noise it is able to eliminate without introducing artifacts into the image.

The second noise reduction test presents us with a scene in motion. It is more difficult to eliminate noise while keeping moving objects crisp and clear. In this test, we are looking for noise reduction as well as a lack of blurring on the ship.
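For readers curious about what these two tests stress, the sketch below (in Python with NumPy and SciPy; the frame layout, threshold, and blending factor are our own illustrative assumptions, not anything Silicon Optix or the GPU vendors publish) shows the basic difference between spatial noise reduction and motion-adaptive temporal noise reduction:

```python
# Illustrative sketch only -- not the algorithm any shipping video processor uses.
# Frames are assumed to be grayscale H x W float32 NumPy arrays.
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_nr(frame, radius=1):
    """Average each pixel with its neighbors; suppresses static 'sparkle'
    in the flower test, but softens fine detail if pushed too far."""
    return uniform_filter(frame, size=2 * radius + 1)

def temporal_nr(prev_frame, frame, motion_threshold=10.0, blend=0.5):
    """Blend the current frame with the previous one only where little
    change is detected, so moving objects (the ship) stay crisp."""
    diff = np.abs(frame - prev_frame)
    static = diff < motion_threshold        # True where the scene is still
    out = frame.copy()
    out[static] = blend * prev_frame[static] + (1.0 - blend) * frame[static]
    return out
```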



Scoring for these tests ranges from 0 to 25, with the highest score going to hardware that is able to reduce noise while maintaining a clear, artifact-free image. While Silicon Optix has stated that any score from 0 to 25 can be assigned, they provide four suggested scores to use as a guide. Here's the breakdown:

25 - The level of noise is noticeably reduced without loss of detail
15 - The level of noise is somewhat reduced and detail is preserved
7 - The level of noise is somewhat reduced but detail is lost
0 - There is no apparent reduction in noise and/or image detail is significantly reduced or artifacts are introduced.

Until we have a better feel for the tests and the variability between hardware, we will stick with only using these delineations.

Video Resolution Loss

After noise reduction, we look at video resolution loss. Resolution loss can occur as a result of deinterlacing, and it effectively reduces the amount of information that is displayed. In interlaced HD video, alternating fields carry the odd and even scanlines of an image. The simplest deinterlacing techniques either duplicate the data in one field and toss out the rest of the information, or average the data in the two fields together to create a frame. Both of these techniques cause artifacts, and both remove detail from the image.
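To make the two naive approaches concrete, here is a rough sketch (Python/NumPy; the separate-field representation is an assumption for illustration, not how any shipping decoder actually stores video):

```python
# Sketch of naive deinterlacing, assuming the two fields of an interlaced
# frame are available as separate (H/2) x W NumPy arrays.
import numpy as np

def bob(field):
    # Use only one field and line-double it: the other field's data -- half
    # of the vertical detail -- is simply thrown away.
    return np.repeat(field, 2, axis=0)

def weave(top_field, bottom_field):
    # Interleave both fields into one frame: perfect for static content,
    # but moving edges show "combing" artifacts.
    out = np.empty((2 * top_field.shape[0], top_field.shape[1]),
                   dtype=top_field.dtype)
    out[0::2] = top_field
    out[1::2] = bottom_field
    return out

def field_average(top_field, bottom_field):
    # Average the two fields and line-double the result: combing is reduced,
    # but detail present in only one field blurs away.
    avg = (top_field.astype(np.float32) + bottom_field.astype(np.float32)) / 2
    return np.repeat(avg, 2, axis=0)
```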

When objects are not in motion, interlaced fields can simply be combined into one frame with no issue, and good hardware should be able to detect whether anything is moving and apply the appropriate deinterlacing method. To test how accurately hardware reproduces interlaced material in motion, Silicon Optix has included an SMPTE test pattern at 1920x1080 with a spinning bar over top, forcing the hardware to employ the type of deinterlacing it would use when motion is detected. In the top and bottom left corners of the SMPTE test pattern are boxes of alternating black and white horizontal lines, each one pixel wide. A high quality deinterlacing algorithm will be able to reproduce these very fine lines, and they are what we are looking for in this test.
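In very rough terms, a motion-adaptive deinterlacer does something like the following sketch (again illustrative Python with an arbitrary threshold; real hardware uses far more sophisticated motion detection than a single frame difference):

```python
import numpy as np

def motion_adaptive(top_field, bottom_field, prev_bottom_field, threshold=12.0):
    """Weave real field data where the picture is static; fall back to
    interpolating from the other field where motion is detected."""
    h, w = top_field.shape
    out = np.empty((2 * h, w), dtype=np.float32)
    out[0::2] = top_field                        # top-field lines pass through

    # Crude per-pixel motion estimate: compare this bottom field against the
    # bottom field from the previous frame at the same positions.
    motion = np.abs(bottom_field.astype(np.float32)
                    - prev_bottom_field.astype(np.float32)) > threshold

    woven = bottom_field.astype(np.float32)      # full detail, static areas
    interp = np.empty_like(woven)                # estimate from adjacent lines
    interp[:-1] = (top_field[:-1].astype(np.float32)
                   + top_field[1:].astype(np.float32)) / 2
    interp[-1] = top_field[-1]

    out[1::2] = np.where(motion, interp, woven)
    return out
```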

Interestingly, AMD, NVIDIA, and PowerDVD software all fail to adequately reproduce the SMPTE resolution chart. We'll have to show a lower resolution example based on a smaller 512x512 version of the chart, but our comments apply to the full resolution results.



If the hardware averages the interlaced fields, the fine lines will be displayed as a grey block, while if data is thrown out, the block will be either solid black or solid white (depending on which field is left out).
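A tiny worked example shows why the fine-line boxes are such a good probe: with one-pixel alternating lines, one field holds only the white lines and the other only the black lines, so any technique that drops or averages a field destroys the pattern (the values below are illustrative):

```python
import numpy as np

white_field = np.full((4, 8), 255.0)   # field holding the white scanlines
black_field = np.zeros((4, 8))         # field holding the black scanlines

# Discard one field and line-double the other -> a solid white (or black) block
bobbed = np.repeat(white_field, 2, axis=0)
print(np.unique(bobbed))               # [255.] -- the pattern is gone

# Average the two fields -> a uniform grey block
averaged = np.repeat((white_field + black_field) / 2, 2, axis=0)
print(np.unique(averaged))             # [127.5] -- also gone

# Weave both fields back together -> the original one-pixel line pattern
woven = np.empty((8, 8))
woven[0::2] = white_field
woven[1::2] = black_field
print(np.unique(woven))                # [0. 255.] -- detail preserved
```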

Scoring for this test is an all or nothing 25 or 0 - either the hardware loses resolution or it does not.

Jaggies

A good deinterlacing algorithm should be able to avoid the aliasing along diagonal lines that is apparent with less sophisticated techniques. This test returns from the original standard definition HQV suite, and it is a good judge of how well hardware handles diagonal lines of varying slope.
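One common way to keep diagonals smooth is edge-directed interpolation, in which each missing pixel is reconstructed along the direction where the neighboring lines agree best. The sketch below is a simple edge-based line averaging illustration of ours, not the algorithm AMD or NVIDIA actually use:

```python
import numpy as np

def ela_interpolate_line(above, below):
    """Interpolate one missing scanline from the real lines above and below,
    choosing, per pixel, the direction (left diagonal, vertical, or right
    diagonal) along which the two lines agree best, so diagonal edges stay
    smooth instead of stair-stepping."""
    above = np.asarray(above, dtype=np.float32)
    below = np.asarray(below, dtype=np.float32)
    w = above.shape[0]
    out = np.empty(w, dtype=np.float32)
    for x in range(w):
        xl, xr = max(x - 1, 0), min(x + 1, w - 1)
        # (difference along direction, interpolated value along direction)
        candidates = [
            (abs(above[xl] - below[xr]), (above[xl] + below[xr]) / 2.0),  # left diagonal
            (abs(above[x] - below[x]),   (above[x] + below[x]) / 2.0),    # vertical
            (abs(above[xr] - below[xl]), (above[xr] + below[xl]) / 2.0),  # right diagonal
        ]
        out[x] = min(candidates)[1]   # pick the direction with the best match
    return out
```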



Here we want each of the three lines to maintain smoothness while moving back and forth around part of the circle. Scoring is based on a sliding scale between 0 and 20 with suggested breakdowns based on which bars maintain smooth edges. We will again be sticking with a score that matches the suggested options Silicon Optix provides rather than picking numbers in between these values.

20 - All three bars have smooth edges at all times
10 - The top two bars have smooth edges, but the bottom bar does not
5 - Only the top bar has a smooth edge
0 - None of the bars have smooth edges

Film Resolution Loss

This test is nearly the same as the video resolution loss test, and the score breakdown is the same: 25 if it works or 0 if it does not. This time around, interlaced video of the SMPTE test pattern is generated using a telecine process to produce 1080i video at 60 fields per second from a 24 fps progressive source. Because of the difference in frame rates between film and video, a 3:2 cadence must be used, in which one frame of film is spread across three interlaced fields and the next frame across two.
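The cadence is easy to see in a small sketch (illustrative Python; frame labels stand in for actual image data):

```python
def telecine_32(frames):
    """frames: progressive 24 fps film frames in order (e.g. ['A','B','C','D']).
    Returns the 60 Hz interlaced field sequence as (frame, field) tuples."""
    fields = []
    parity = 0                                   # 0 = top field next, 1 = bottom
    for i, frame in enumerate(frames):
        for _ in range(3 if i % 2 == 0 else 2):  # the "3:2" in 3:2 pulldown
            fields.append((frame, 'top' if parity == 0 else 'bottom'))
            parity ^= 1
    return fields

# Four film frames (1/6 second of 24 fps film) become ten fields
# (1/6 second of 60 field-per-second video):
print(telecine_32(['A', 'B', 'C', 'D']))
# [('A', 'top'), ('A', 'bottom'), ('A', 'top'),
#  ('B', 'bottom'), ('B', 'top'),
#  ('C', 'bottom'), ('C', 'top'), ('C', 'bottom'),
#  ('D', 'top'), ('D', 'bottom')]
```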

One major advantage of this process is that it is reversible, meaning that less guesswork needs to go into properly deinterlacing video produced from a film source. The process of reversing this 3:2 pulldown is called inverse telecine, and it can be employed very effectively to produce a progressive image from interlaced media. If this is done correctly, no resolution needs to be lost.
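Continuing the sketch above, undoing the pulldown amounts to spotting the runs of fields that came from the same film frame and re-pairing them. Real inverse telecine has to infer the cadence from pixel data rather than convenient labels, so this only captures the idea:

```python
def inverse_telecine(fields):
    """fields: (frame, field) tuples in 3:2 cadence order.
    Collapses each run of fields that came from the same film frame back
    into one progressive frame; every original field is kept, so no
    resolution is lost."""
    frames = []
    for frame, _field in fields:
        if not frames or frames[-1] != frame:
            frames.append(frame)
    return frames

cadence = [('A', 'top'), ('A', 'bottom'), ('A', 'top'),
           ('B', 'bottom'), ('B', 'top'),
           ('C', 'bottom'), ('C', 'top'), ('C', 'bottom'),
           ('D', 'top'), ('D', 'bottom')]
print(inverse_telecine(cadence))   # ['A', 'B', 'C', 'D'] -- the 24p frames
```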

Rather than placing a moving bar over the test pattern, this test shifts the image back and forth from left to right; resolution loss can make the image appear to strobe or produce the appearance of vertical lines along the edges of fine detail.

Film Resolution Loss - Stadium Test

The final test is a practical test of film resolution loss, showing what can happen when a film source is not accurately reproduced. In this case, flickering in the stadiums or a moiré pattern can become apparent.



Scoring for this test is another all or nothing score granting the video decoder being tested either a 10 or a 0.

Now that we've gotten familiar with these tests, let's take a look at how AMD and NVIDIA stack up under HD HQV.

Comments

  • bigpow - Monday, February 12, 2007 - link

    I'd like to see more results, maybe from xbox 360 hd-dvd & toshiba HD-DVD players before I can be convinced that ATI & NVIDIA totally suck

  • thestain - Sunday, February 11, 2007 - link

    Suggest a redo
  • ianken - Friday, February 9, 2007 - link

    ...I meant that in the context of post processing. FWIW.
  • ianken - Friday, February 9, 2007 - link

    Since every HD DVD and BRD I've seen is authored at 1080p, I don't think 1080i film cadence support is that critical for either next-gen disc format.

    It is critical for HD broadcasts where 1080i content is derived from telecined film or HD24p content and not flagged, which is very very common on cable and OTA feeds.

    Noise reduction: just say no. It is NOT more important for HD. Noise reduction simply replaces random noise with deterministic noise and reduces true detail, I don't care how much magic is in there. With FUBAR analog cable it can make an unwatchable image moderately palatable, but keep it away from my HD-DVD, BRD content or broadcast HD.

    On my 7800GTX I get film cadence detection and adaptive per-pixel vector deinterlace on 1080i. The problem you're seeing may be with the HD-DVD/decoder app failing to properly talk to the GPU. On XP they need to support proprietary APIs to get anything beyond base VMR deinterlacing, particularly for HD. With Cyberlink there is even a "PureVideo" option in the menus for this. If they do not support PureVideoHD then you will get none of those advanced features on Nvidia hardware. Not sure what ATI does, but I do believe they only support film cadence and noise reduction on SD content.



  • peternelson - Friday, February 9, 2007 - link

    "Noise can actually be more of a problem on HD video due to the clarity with which it is rendered. While much of the problem with noise could be fixed if movie studios included noise reduction as a post processing step, there isn't much content on which noise reduction is currently performed. This is likely a combination of the cost involved in noise reduction as well as the fact that it hasn't been as necessary in the past. In the meantime, we are left with a viewing experience that might not live up to the expectations of viewers, where a little noise reduction during decoding could have a huge impact on the image quality.

    There are down sides to noise reduction, as it can reduce detail. This is especially true if noise was specifically added to the video for effect. We don't run into this problem often, but it is worth noting. On the whole, noise reduction will improve the clarity of the content, especially with the current trend in Hollywood to ignore the noise issue. "

    > Doing noise reduction at the player is less than ideal. You take noisy content then waste much of your datarate describing noise. The NR should be done as a PRE PROCESSING (as opposed to POST) step prior to feeding the encoder (not post processing as you suggest). Any movie studios making disks without NR are just lazy, and the customer deserves better. Obviously a generous bitrate and efficient encoding standard like mpeg4 are desirable, but you waste the benefit if you don't either noise-reduce it or have substantively no-noise content like CGI animation sequences from Pixar.

    Thus the workflow ought to be Telecine scan data or digital intermediate eg 2K film res into colour correction into pan/scan cropping or aspect ratio conversion scaling (eg cinemascope into 16x9) then into noise reduction (spatial and temporal etc) into encoder.

    Done professionally different portions of the movie can be encoded with different processing parameters which kick in at the desired timecodes. These are often hand-optimised for sequences that can benefit from them. Such setups may be called ECL (encoder control lists) rather like EDL (edit decision lists).

    Equipment to do excellent realtime noise reduction in high definition is readily available eg from Snell and Wilcox, and if you can't afford it you should either not be in the encoding business, or should be hiring it for the duration of the job from a broadcast hire supplier. Alternatively NR processing may be a feature of your telecine/datacine capture platform.

    Ideally the encoded streams can be compared with the source material to identify any significant encoding artifacts like noticeable DCT macroblocking. This is basic QA and can be done in software and/or visually/manually.

    If the NR is done by the studio prior to disk mastering, I see no reason to rely on the cheap and nasty NR in the player, and of course using a display capable of the proper bit depth and resolution will avoid quantisation banding and scaling degradation.

    Poor attention to production values is diminishing the experience of what ought to be great content.

    Contrary to your statement, noise reduction ought to have been used at standard definition too by anyone doing encoding professionally for DVDs etc. Even moderately expensive/affordable gear from FOR-A could do NR and colour correction using SDI digital ins and outs (that's if you can't afford the Snell gear LOL). The difference is certainly noticeable even before moving to HD content and bigger screens.

    Not all noise reduction techniques reduce detail, particularly when done at the preprocessing stage. Taking noise out makes more bits available for the denoised content to be described in MORE detail for equivalent bitrate. Clever algorithms are able to take out hairs from frames of movie film and replace them with what ought to be there from adjacent frames (including using motion vector compensation). At this stage the maximum uncompressed source data is available on which to perform the processing, whereas NR in the player suffers from only having the bit-constrained compressed material to recreate from. Other pre-processing might include removing camera shake (eg Snell Shakeout) so that compression bits are not wasted on spurious motion vectors where these are undesired. Genuine pans, zooms etc can be distinguished and still get encoded.

    You rightly point out that video using deliberately added noise as simulation of film grain can be troublesome to encode, but there are several other techniques for making video appear film-like, eg Magic Bullet hardware or software as pioneered by The Orphanage which can do things like alter the gamma curve, and replicate various film lab processes like bleach bypass (like opening sequences of Saving Private Ryan).
  • DerekWilson - Sunday, February 11, 2007 - link

    Thanks for the very informative post.

    I think we've got a bit of a miscommunication though ...

    I'm not referring to post processing as post-encoding -- I'm referring to it as hollywood refers to it -- post-filming ... as in "fix it in post". You and I are referring to the same step in the overall scheme of things: after filming, before encoding.

    It seems a bit odd that I hadn't heard anyone talk about processing from the perspective of the encoding step before, as a brief look around google shows that it is a very common way of talking about handling content pre and post encoding.

    In any event, it may be that studios who don't do noise reduction are just lazy. Of course, you'd be calling most of them lazy if you say that. We agree that the customer deserves better, and currently they aren't getting it. Again, go pick up X-Men 3. Not that I liked the movie, but I certainly would have appreciated better image quality.

    Does your statement "If the NR is done by the studio prior to disk mastering, I see no reason to rely on the cheap and nasty NR in the player" go the other way as well? If studios do not perform noise reduction (or, perhaps, adequate noise reduction) prior to mastering, is NR in the player useful?

    I think it is -- but I do want to be able to turn it on and off at will.
  • Wesleyrpg - Thursday, February 8, 2007 - link

    Read more like an advertisement for silicon optix than an article for Anandtech?

    The future of advertising? Buy an article?
  • JarredWalton - Thursday, February 8, 2007 - link

    Hardly. People email us about all kinds of topics, and one of those has been HD video support. We've done HQV image quality comparisons before, as have many websites, and it's not too surprising that NVIDIA and ATI decoder quality improved after many of the flaws were pointed out. It appears that there are plenty of flaws with the 1080i decoding now, and I'd bet that in the future it will be dramatically improved. We find the results to be useful - i.e. both ATI and NVIDIA are doing essentially nothing with HD video other than outputting it to the display. Now, readers will know that and maybe we'll see improvements. Not everyone cares about improving HD video quality, but for those that do this is good information to have.
  • Wwhat - Sunday, February 11, 2007 - link

    quote:

    both ATI and NVIDIA are doing essentially nothing with HD video other than outputting it to the display

    Well that's clearly not true; they both try to de-interlace, as the test shows, it's just not a good effort, so don't make such silly statements.


  • Wesleyrpg - Friday, February 9, 2007 - link

    Sorry Jarred, I must have woken up on the wrong side of the bed this morning; I didn't mean to take it out on you guys. I love Anandtech, and may have been a bit confused by the article.

    Sorry again
