Intel to Support Hardware Ray Tracing Acceleration on Data Center Xe GPUs
by Ryan Smith on May 1, 2019 9:15 AM EST

In a blink-and-you’ll-miss-it moment, tucked inside a larger blog post about announcements relating to this week’s FMX graphics conference, Intel has made its first official comments about hardware ray tracing support for their upcoming Xe GPUs. In short, the company will be offering some form of hardware ray tracing acceleration – but this announcement only covers their data center GPUs.
The announcement itself is not much longer than that, so rather than lead into it, I’ll just repost it verbatim.
“I’m pleased to share today that the Intel® Xe architecture roadmap for data center optimized rendering includes ray tracing hardware acceleration support for the Intel® Rendering Framework family of API’s and libraries.”
Past that, there’s little concrete to be said, especially given the high-level nature of the announcement. It doesn’t specify whether ray tracing hardware support will be coming to the first generation of data center Xe GPUs, or if being on the roadmap means it will be coming in a future generation. Nor does Intel state with much clarity just what hardware acceleration entails. But since it’s specifically “hardware acceleration” rather than merely “support”, I would expect dedicated hardware for ray casting, especially in a high-end product like a data center GPU.
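To make the distinction concrete, the sketch below shows the core per-ray work that such hardware would offload: intersecting a single ray with a single triangle (the Moller-Trumbore test), which software renderers like the libraries in Intel’s Rendering Framework run enormous numbers of times per frame. This is a generic, illustrative example in plain C++, not Intel’s implementation; the function names and scene values are my own.

```cpp
// Illustrative sketch: the per-ray/per-triangle intersection test that ray
// tracing hardware typically accelerates (along with BVH traversal to decide
// which triangles to test). Generic example, not Intel's implementation.
#include <array>
#include <cmath>
#include <cstdio>
#include <optional>

using Vec3 = std::array<float, 3>;

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
}
static float dot(const Vec3& a, const Vec3& b) { return a[0]*b[0]+a[1]*b[1]+a[2]*b[2]; }

// Moller-Trumbore: returns the distance t along the ray to the hit, or nothing on a miss.
std::optional<float> intersect(const Vec3& orig, const Vec3& dir,
                               const Vec3& v0, const Vec3& v1, const Vec3& v2) {
    const float kEps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < kEps) return std::nullopt;   // ray is parallel to the triangle
    float invDet = 1.0f / det;
    Vec3 tvec = sub(orig, v0);
    float u = dot(tvec, p) * invDet;                  // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return std::nullopt;
    Vec3 q = cross(tvec, e1);
    float v = dot(dir, q) * invDet;                   // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return std::nullopt;
    float dist = dot(e2, q) * invDet;                 // distance along the ray
    if (dist > kEps) return dist;
    return std::nullopt;
}

int main() {
    // One ray shot straight down the -Z axis at a triangle in the z = -1 plane.
    Vec3 orig{0, 0, 0}, dir{0, 0, -1};
    Vec3 v0{-1, -1, -1}, v1{1, -1, -1}, v2{0, 1, -1};
    if (auto t = intersect(orig, dir, v0, v1, v2))
        std::printf("hit at t = %f\n", *t);           // prints t = 1.0 for this scene
    return 0;
}
```

A real renderer runs this test (and the acceleration-structure traversal feeding it) millions of times per frame, which is why moving it into fixed-function units is attractive for a data center rendering part.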
Overall, Intel’s blog post notes that the company will be taking a “holistic platform” approach to ray tracing, tapping both CPUs and GPUs for the task. So while GPUs will be a big part of Intel’s efforts to grow their ray tracing performance, the company will be looking to leverage both those and their traditional CPUs for future ray tracing endeavors. Intel will of course be the new kid on the block as far as GPUs go, so it’s not surprising that the company is looking at how these (and other new processor technologies) can work in tandem with its CPUs.
Source: Intel
31 Comments
bug77 - Wednesday, May 1, 2019 - link
Makes sense. Gamers are a whiny bunch; it's professionals that need RTRT the most.

As I have repeatedly said, Nvidia would have been much better off if they had confined their 1st-gen RTRT to Quadro cards. It seems Intel's approach makes more sense here (depending on how their implementation works out).
DanNeely - Wednesday, May 1, 2019 - link
At this point I strongly suspect that both ray tracing and DLSS fell significantly short of performing as well as NVidia had originally hoped. If they knew performance was going to end up at its current levels, it would've made a lot more sense to limit it to the 2080 Ti/Titan level cards (with maybe a heavily cut down, off-label model from bad dies as a cheaper option for developers) as a tech demo/developer platform, along with a promise of much wider availability in the next generation of cards once the 7nm die shrink made fitting enough cores in more feasible.
Opencg - Wednesday, May 1, 2019 - link
I'm not sure how they thought that 1 trace per pixel at 1080p/60 fps was ever going to look good.
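For rough scale on that figure: one primary ray per pixel at 1920×1080 and 60 fps works out to roughly 124 million rays per second, before any shadow or bounce rays. A quick back-of-the-envelope check (the resolution and refresh rate are assumed, and secondary rays are ignored):

```cpp
// Back-of-the-envelope math for one ray per pixel at 1080p/60 fps.
// Assumed values; shadow rays, bounces, and denoising are not counted.
#include <cstdio>

int main() {
    const long long width = 1920, height = 1080, fps = 60;
    const long long raysPerPixel = 1;
    long long raysPerFrame  = width * height * raysPerPixel;   // ~2.07 million
    long long raysPerSecond = raysPerFrame * fps;              // ~124 million
    std::printf("rays/frame: %lld, rays/second: %lld\n", raysPerFrame, raysPerSecond);
    return 0;
}
```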
willis936 - Wednesday, May 1, 2019 - link
That sounds like plenty for Asteroids :D
mode_13h - Wednesday, May 1, 2019 - link
Of course it doesn't, but their mistake was using Battlefield as a "launch" title.

They should've partnered with a couple of smaller indie developers to make some little games that really showcase RTX's potential, and then included a copy with every RTX card. Then, while people were waiting for content, gamers would at least know what might be looming just over the horizon.
TEAMSWITCHER - Wednesday, May 1, 2019 - link
I disagree. Even with an RTX 2060, you can enable or disable RTX features as a user preference if the title supports it. The implementation of Global Illumination in Metro Exodus is particularly impressive if users want the full atmospheric experience over raw performance.
KateH - Wednesday, May 1, 2019 - link
Came here to say this.

If all that RT cores gave us was reflections, then yeah, that would be disappointing. But it turns out regular shader cores can do raytraced reflections just fine, and what dedicated hardware gets us is raytraced global illumination and ambient occlusion, which is a pretty big deal. I suggest RTX naysayers try Metro with and then without raytracing enabled (actually playing it, not just screenshots or YouTube analysis) and they will see what the big deal is.
And no, I'm not shilling for NVidia. If anything I'm a bit of a fangirl for AMD (go underdog!), but NV are the ones who have the hardware for raytraced GI right now, and as a cinematographer in real life, I have an eye for lighting that extends into the virtual world. And raytraced lighting is the real deal, even "half-baked first-gen" raytraced lighting. Can't wait to see how far gfx will go in a few years when the hardware gets beefier and more titles start incorporating RT fx.
0ldman79 - Wednesday, May 1, 2019 - link
Honestly, given the improvement ray tracing shows in global illumination, I'm thinking they're going to trade accuracy for speed anyway, even considering that next gen will likely double the RT hardware.
Realtime *anything* is a sacrifice of quality for speed. It is just a simulation that is profoundly limited compared to real-world physics and lighting.
They'll figure out how to be accurate where needed and fudge the numbers further when applicable.
Crytek has already done something similar; otherwise it couldn't run on a regular GPU.
mode_13h - Wednesday, May 1, 2019 - link
Whatever they're doing, it can't be realtime global illumination - probably something pre-computed, which means it can't react to lighting changes or other environmental changes.

I think gamers often don't appreciate the way that game designers try to work within the constraints of their engine. So you can have a game that looks good, but perhaps partly because the game designers had one hand tied behind their backs. With those types of constraints loosened, you could see the same or better realism in a more diverse range of environments, settings, lighting, and camera angles.
beginner99 - Thursday, May 2, 2019 - link
I rather think they simply didn't have the money to create two separate high-end GPUs, one with and one without RT. To make money from the large dies they needed to jack up prices, and that could only be done with new features, even if those features are in reality not very useful.