HTPC Aspects : Introduction

Home Theater PC (HTPC) enthusiasts keep close tabs on the launch of discrete GPUs that don't need a PCIe power connector. Such cards make it easy to upgrade an old PC with a low-wattage PSU into a multimedia powerhouse. Over the last decade or so, GPUs have added HTPC functionality in response to consumer demand as well as changing and expected market trends. In the beginning, we had hardware-accelerated MPEG-2 decode. This was followed by H.264 / VC-1 acceleration (thanks to the emergence of Blu-rays), HD audio bitstreaming and 3D video support. More recently, we have seen support for decode and playback of 4K video.

4K presents tangible benefits to consumers (unlike 3D), and market adoption is growing rapidly. In many respects, this is similar to how people migrated to 720p and 1080i TV sets when vendors started promoting high definition (HD). We know that these early adopters were stuck with expensive CRT-based TVs when LCD-based 1080p sets came to market at very reasonable prices. While there is no 'CRT-to-LCD'-like sea change on the horizon, end users need to keep in mind the imminent launch of HDMI '2.0' (the HDMI consortium wants to do away with version numbers for reasons known only to them) with 4Kp60 capability, as well as display sinks fully compliant with that standard.

In the near future, most of the 4K material reaching consumers is expected to be encoded in H.264. Consumer devices such as the GoPro cameras still record 4K in that codec only. From a HTPC GPU perspective, it is therefore imperative to have support for 4K H.264 decoding. Most real-time encoding activities will also use H.264, even though a good HEVC (H.265) encoder would be more efficient in terms of bitrate; the problem is that it is very difficult to make a good HEVC encoder operate in real time. Archiving content wouldn't have that problem, though. So, it can be expected that content from streaming services and local backups (where the encoding is done offline) will move to HEVC first. A future-proof HTPC GPU should be capable of HEVC decode too.

Where does the Maxwell-based GTX 750 Ti stand when the above factors are taken into account? Make no mistake, the NVIDIA GT 640 happens to be our favorite HTPC GPU when 4K capability is considered an absolute necessity. On paper, the GTX 750 Ti appears to be a great candidate to take over the reins from the GT 640. In order to evaluate its HTPC credentials, we put the GTX 750 Ti to the test against the Zotac GT 640 as well as the Sapphire Radeon HD 7750.

In our HTPC coverage, we first look at GPU support for network streaming services, followed by hardware decoder performance for local file playback (a section that also covers madVR). In the third section, we take a look at some of the miscellaneous HTPC aspects such as refresh rate accuracy and hardware encoder performance.

The HTPC credentials of the cards were evaluated using the following testbed configuration:

NVIDIA GTX 750 Ti HTPC Testbed Setup
Processor: Intel Core i7-3770K - 3.50 GHz (Turbo to 3.9 GHz)
GPU: NVIDIA GTX 750 Ti / Zotac GT 640 / Sapphire Radeon HD 7750
Motherboard: Asus P8H77-M Pro uATX
OS Drive: Seagate Barracuda XT 2 TB
Secondary Drives: OCZ Vertex 2 60 GB SSD + Corsair P3 128 GB SSD
Memory: G.SKILL ECO Series 4GB (2 x 2GB) DDR3-1333 SDRAM (PC3 10666) F3-10666CL7D-4GBECO CAS 9-9-9-24
Case: Antec VERIS Fusion Remote Max
Power Supply: Antec TruePower New TP-550 550W
Operating System: Windows 8.1 Pro
Display / AVR: Sony KDL46EX720 + Pioneer Elite VSX-32 / Acer H243H
Graphics Drivers: GeForce v334.69 / Catalyst 14.1 Beta
Software: CyberLink PowerDVD 13, MPC-HC 1.7.3, madVR 0.87.4

All three cards were evaluated using the same hardware and software configuration. The Sapphire Radeon HD 7750 has a slight advantage in the power consumption department thanks to its passive cooling system (there is no fan to drive). Other than that, we are making an apples-to-apples comparison when discussing power consumption numbers for the various activities in the next few sections.

Comments

  • RealiBrad - Tuesday, February 18, 2014 - link

    If you were to run the AMD card 10 hours a day at the average cost of electricity in the US, you would pay around $22 more a year in electricity. The AMD card gives a 19% boost in performance for a 24.5% boost in power usage. That means that the Nvidia card is around 5% more efficient. It's nice that they got the power envelope so low, but if you look at the numbers, it's not huge.

    The biggest factor is the supply coming out of AMD. Unless they start making more cards, the 750 Ti will be the better buy.
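    As a sanity check on the $22/year figure above, here is a minimal sketch of the arithmetic; the ~50 W consumption gap and the $0.12/kWh electricity rate are assumed illustrative values, not measurements from this review.

```python
# Back-of-the-envelope check of the ~$22/year figure quoted above.
# The ~50 W draw gap and $0.12/kWh rate are assumed illustrative values,
# not numbers taken from this review's measurements.

def annual_cost_delta(extra_watts: float, hours_per_day: float, price_per_kwh: float) -> float:
    """Extra electricity cost per year of a card drawing `extra_watts` more."""
    extra_kwh_per_year = extra_watts / 1000.0 * hours_per_day * 365
    return extra_kwh_per_year * price_per_kwh

# ~50 W extra draw, 10 hours/day, $0.12 per kWh -> roughly $22 per year
print(f"${annual_cost_delta(50, 10, 0.12):.2f} per year")
```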
  • Homeles - Tuesday, February 18, 2014 - link

    Your comment is very out of touch with reality, in regards to power consumption/efficiency:

    http://www.techpowerup.com/reviews/NVIDIA/GeForce_...

    It is huge.
  • mabellon - Tuesday, February 18, 2014 - link

    Thank you for that link. That's an insane improvement. Can't wait to see 20nm high end Maxwell SKUs.
  • happycamperjack - Wednesday, February 19, 2014 - link

    That's for gaming only; its compute performance/watt is still horrible compared to AMD though. I wonder when Nvidia can catch up.
  • bexxx - Wednesday, February 19, 2014 - link

    http://media.bestofmicro.com/9/Q/422846/original/L...

    260 kH/s at 60 watts is actually very high; that is basically matching the 290X in kH/watt (~1000 kH/s / 280 watts), and beating out the R7 265 or anything else... if you only look at kH/watt.
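    A minimal sketch of the kH/s-per-watt arithmetic above; the hash rates and wattages are the commenter's quoted figures, not independently verified numbers.

```python
# kH/s-per-watt comparison using the figures quoted in the comment above
# (assumed, not independently verified).
cards = {
    "GTX 750 Ti": (260.0, 60.0),   # (kH/s, watts)
    "R9 290X": (1000.0, 280.0),    # ~1000 kH/s at ~280 W
}

for name, (khs, watts) in cards.items():
    print(f"{name}: {khs / watts:.2f} kH/s per watt")
```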
  • ninjaquick - Thursday, February 20, 2014 - link

    To be honest, all nvidia did was increase the granularity of power gating and core states, so in the event of pure burn, the TDP is hit, and the perf will (theoretically) droop.

    The reason the real world benefits from this is simply the way rendering works, under DX11. Commands are fast and simple, so increasing the number of parallel queues allows for faster completion and lower power (Average). So the TDP is right, even if the working wattage per frame is just as high as any other GPU. AMD doesn't have that granularity implemented in GCN yet, though they do have the tech for it.

    I think this is fairly silly, Nvidia is just riding the coat-tails of massive GPU stalling on frame-present.
  • elerick - Tuesday, February 18, 2014 - link

    Since the performance charts include the 650 Ti Boost, I looked up its TDP: 140W. Compared to the Maxwell 750 Ti with its 60W TDP, I am in awe of the performance per watt. I sincerely hope the 760/770/780 arrive on 20nm to give the performance a sharper edge, but even if they don't, this will still give people with older graphics cards more of a reason to finally upgrade, since driver performance tuning will start favoring Maxwell over the next few years.
  • Lonyo - Tuesday, February 18, 2014 - link

    The 650 Ti / Ti Boost aren't cards designed to be efficient. They are cut-down cards with sections of the GPU disabled. While 2x perf per watt might be somewhat impressive, it's not that impressive given the comparison is made to inefficient cards.
    Comparing it to something like a regular GTX 650, which is a fully enabled GPU, might be a more apt comparison, and probably wouldn't give the same perf/watt increases.
  • elerick - Tuesday, February 18, 2014 - link

    Thanks, I haven't been following lower end model cards for either camp. I usually buy $200-$300 class cards.
  • bexxx - Thursday, February 20, 2014 - link

    Still just over 1.8x higher perf/watt: http://www.techpowerup.com/reviews/NVIDIA/GeForce_...
