-
As the title implies, I'm aiming to measure the impact of enabling real-time ray tracing (RT) so we can get some insight into its performance cost on competing architectures. I've chosen The Riftbreaker and Resident Evil Village as they're newer titles, and AMD-sponsored ones.
There was, and still is, some hoo-ha and reluctance around comparing these cards in Nvidia-sponsored titles, and I can at least in part see why. So I wanted to give AMD the chance to put its best foot forward here, given that these games are optimized for AMD and the effects (number of effects, ray counts, etc.) are chosen and geared to AMD hardware capability.
The two sources I took the figures from use testing methods that appear to give highly repeatable and accurate results, via a built-in benchmark or tightly scripted story events/runs.
Riftbreaker 1440P Ultra - results taken from Wccftech video
| | No RT avg / 1% low | RT Ultra avg / 1% low | RT Ultra + RT AO avg / 1% low | RT Ultra performance as a % of raster fps | RT Ultra + RT AO performance as a % of raster fps | % faster at rendering RT Ultra effect | % faster at rendering RT Ultra + RT AO effects |
|---|---|---|---|---|---|---|---|
| RX 6800 XT fps | 400 / 340 | 160 / 116 | 132 / 103 | 34-40% | 30-33% | | |
| RX 6800 XT frametime | 2.5 / 2.9 ms | 6.3 / 8.6 ms | 7.6 / 9.7 ms | | | | |
| RTX 3080 fps | 334 / 287 | 196 / 154 | 165 / 133 | 53-59% | 46-49% | | |
| RTX 3080 frametime | 3.0 / 3.5 ms | 5.1 / 6.5 ms | 6.1 / 7.5 ms | | | 80-90% | 65-70% |

Resident Evil Village 1440p max - results taken from TechPowerUp performance review
| | No RT avg | RT on avg | RT on performance as a % of raster fps | % faster at rendering RT effect |
|---|---|---|---|---|
| RX 6800 XT fps | 202.2 fps | 95.5 fps | 47% | |
| RX 6800 XT frametime | 4.9 ms | 10.5 ms | | |
| RTX 3080 fps | 175 fps | 108.4 fps | 62% | |
| RTX 3080 frametime | 5.7 ms | 9.2 ms | | ~60% |

As you can see from the data, before we enable any RT settings the 6800XT comfortably surpasses the RTX 3080 by 15-20% at 1440p across both tested games. That isn't unexpected for AMD-sponsored titles, where the RDNA2 parts are generally able to outpace their usual Ampere competitor. And credit where it's due at this point: those are some serious optimizations that allow RDNA2 cards to effectively compete one to two tiers above their usual rival.
It all turns around when we enable RT, however, and there are different ways to size that up.
In RE Village we see that the RTX 3080 retains 62% of the rasterization-only average FPS, where the 6800XT can only retain 47%, but the relative difference between those two numbers doesn't tell the entire story.
Let's look at those FPS figures as frame times instead: the RTX 3080 incurs an average 3.5 ms render-time penalty to render the RT effects, but the 6800XT incurs a 5.6 ms penalty, leading us to believe that the 3080 can render the RT effects ~60% faster in this scenario.
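For anyone who wants to check the arithmetic, here's a minimal sketch of the method using only the RE Village figures from the table above. The function names are mine, and the final percentage drifts a point or two depending on where you round:

```python
# Frame-time method: convert each FPS figure to milliseconds per frame,
# then treat the difference between RT-on and RT-off as the cost of RT.

def frametime_ms(fps):
    """Milliseconds spent on one frame at a given FPS."""
    return 1000.0 / fps

def rt_penalty_ms(raster_fps, rt_fps):
    """Extra milliseconds per frame spent rendering the RT effects."""
    return frametime_ms(rt_fps) - frametime_ms(raster_fps)

penalty_6800xt = rt_penalty_ms(202.2, 95.5)   # ~5.5 ms
penalty_3080 = rt_penalty_ms(175.0, 108.4)    # ~3.5 ms

# How much faster the 3080 gets through the extra RT work
speedup_pct = (penalty_6800xt / penalty_3080 - 1) * 100
print(f"6800XT RT penalty: {penalty_6800xt:.1f} ms")
print(f"3080 RT penalty:   {penalty_3080:.1f} ms")
print(f"3080 renders the RT effects ~{speedup_pct:.0f}% faster")
```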
The story gets even more interesting in Riftbreaker.
We see that the RTX 3080 retains 53-59% of the rasterization-only average FPS, where the 6800XT can only retain 34-40%. Again looking at frame times, the RTX 3080 incurs a 2.1-3.0 ms render-time penalty to render the RT effects, but the 6800XT incurs a 3.8-5.7 ms penalty, leading us to believe that the 3080 can render the RT effects ~80-90% faster than the 6800XT in this scenario.
But it doesn't stop there: when we add yet more RT load into the pipeline, the numbers shift and the gap narrows.
The RTX 3080 incurs a 3.1-4.0 ms render-time penalty to render both RT effects on top of rasterization-only gameplay, but the 6800XT incurs a 5.1-6.8 ms penalty, leading us to believe that the 3080 can render both RT effects ~65-70% faster in this scenario.
What are some of the conclusions we can draw?
For at least an RTX 3080 vs 6800XT matchup, in these games, the 3080 appears to handle the additional RT workload considerably faster, on the order of 60-90% depending on the game and settings. The obvious caveat is the small sample size of games; however, my conclusions appear to closely match those Digital Foundry reached when they compared these cards head to head, seeking the same sort of answers across different games.

While the absolute performance with RT on can look very competitive for the 6800XT relative to the 3080 (in these titles and perhaps others too), that figure doesn't tell the story as well as showing how many extra milliseconds RT adds to your frame time, and how much rasterization-only performance is lost when enabling RT.

When increasing the number of effects applied, the lead in RT computation that the 3080 holds starts to diminish. To what end? Hard to tell; perhaps given enough RT load the 6800XT would eventually overtake the 3080, but both would likely be a slideshow at that point.

I'd like to hear any other conclusions/theories people might be willing to draw from this data or other similar comparisons, and whether my testing is flawed or omits valuable data or points of interest.
-
The reason the 6800xt wins in those games is the "sponsorship" aspect. RT is turned down and limited to maintain performance.
What it comes down to is this: when the RT compute takes up most of the frame time, Nvidia has a large performance advantage in framerate. When rasterization takes up most of the frame time, AMD competes very well and can even beat Nvidia.
So if you design a game with just some light RT shadows and a handful of reflected objects at 25% resolution and only 0.5 bounces per pixel, etc., the RX 6000 series does exceptionally well. Design the game with more RT shadows and lots of object reflections at 100% resolution and 1 or 2 bounces per pixel, and the RTX 3000 series has a large advantage.
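To put rough numbers on how quickly those knobs multiply, here's an illustrative ray-budget calculation. The presets and parameter values are invented for the sketch, not taken from any real game's RT configuration:

```python
# Illustrative ray-budget arithmetic; all parameter values are made up.

def rays_per_frame(width, height, res_scale, rays_per_pixel, bounces):
    """Rough count of traced ray segments per frame. res_scale is the
    per-axis scale, so 0.5 means the effect runs at 25% of the pixels;
    each bounce adds another segment per primary ray."""
    pixels = width * height * res_scale**2
    return pixels * rays_per_pixel * (1 + bounces)

# "Light" preset: quarter-resolution effect, half a ray per pixel, 1 bounce
light = rays_per_frame(2560, 1440, 0.5, 0.5, 1)    # ~0.9M segments

# "Heavy" preset: full-resolution effect, 1 ray per pixel, 2 bounces
heavy = rays_per_frame(2560, 1440, 1.0, 1.0, 2)    # ~11.1M segments

print(f"heavy preset is ~{heavy / light:.0f}x the RT work")   # ~12x
```

Same scene, same output resolution, yet the "heavy" knob settings ask for roughly an order of magnitude more ray work, which is how the same two cards can land on opposite sides of the comparison.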
ID: h72d70n
ID: h72wqa3
1 ray/triangle intersection per clock per Ray Accelerator (RA) is considered the peak rate for RDNA2.
Acceleration is limited to traversal/intersection tests and the GPU's ability to generate bounding boxes for the ray searches to begin within (limiting the area to be processed). Denoising is required after all rays are processed.
On either architecture, the actual hit processing is handled by FP32 shaders once an intersection is detected. If you expand the ray/triangle intersection rate AND the FP32 processing, as Nvidia has done and AMD will do in RDNA3, you get better RT performance.
It's why MCM GPUs also make sense to push RT performance and increase ray cast density, giving higher quality RT effects.
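For a rough sense of scale, here's a back-of-envelope peak-rate estimate for a 6800 XT, assuming the commonly cited RDNA2 figure of 1 ray/triangle or 4 ray/box intersections per clock per RA. The RA count and clock are ballpark reference-card numbers, and real throughput lands far below this ceiling because traversal itself runs on the shaders:

```python
# Theoretical RDNA2 intersection-rate ceiling; ballpark numbers only.

ray_accelerators = 72       # one RA per CU on a reference RX 6800 XT
boost_clock_hz = 2.25e9     # approximate boost clock, ~2.25 GHz

tri_tests_per_s = ray_accelerators * boost_clock_hz * 1   # 1 tri test/clock/RA
box_tests_per_s = ray_accelerators * boost_clock_hz * 4   # 4 box tests/clock/RA

print(f"~{tri_tests_per_s / 1e9:.0f}G ray/triangle tests per second")  # ~162G
print(f"~{box_tests_per_s / 1e9:.0f}G ray/box tests per second")       # ~648G
```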
ID: h74g2yi
> The reason the 6800xt wins in those games is the "sponsorship" aspect. RT is turned down and limited to maintain performance.
Let me turn this around and ask: Couldn't one also say that Nvidia titles crank up the RT to further gimp AMD gpus?
Perhaps it was unintentional, but when The Witcher 3 was released it had the first implementation of Nvidia's new hair effects, and the tessellation was turned up so high, even for the lowest settings, that the framerate absolutely tanked on AMD GPUs while Nvidia ran fine.
After users figured out how to manually reduce the tessellation levels the performance difference more or less disappeared, and the best part was that the hair effects never needed all that tessellation in the first place, hair looked just as good at 16x tessellation as it did at 64x tessellation. (Maybe better, since the 16x hair ran at 60fps, and the 64x hair ran at 15fps.)
Like, maybe this is a dumb question, but how much ray tracing is necessary to visually improve the quality of an image? Unless I'm mistaken Spider-Man on the PS5 uses ray tracing, but because it only uses RT for reflections in windows the framerate is solid even on AMD hardware.
It seems like first generation ray traced games are going big, they're doing most or all of their illumination with ray tracing, but there's nothing saying that a game has to be 100% ray traced or 100% rasterized, it's totally possible to mix and match the two technologies.
Yeah, the AMD optimization and reduced use of RT may have made the difference in these specific benchmarks, but subjectively speaking do we know if any image quality was lost? If these games had been optimized for Nvidia instead of AMD, would they look better to us? Unpopular opinion here: When I look at Cyberpunk with max ray tracing side by side with maxed out screen space reflections and ambient occlusion, I can't really tell them apart, let alone tell which one is "better." If we, the consumer, don't lose anything by virtue of AMD optimization, and maybe even gain a few frames per second, isn't that a win?
I don't know, I just wonder if Nvidia isn't using ten pounds of ray tracing where ten ounces would do the job.
ID: h751a0s
I would not be surprised if Nvidia titles turn up the RT to disadvantage AMD, like they did with tessellation.
ID: h74ih6e
In Cyberpunk the ray tracing was definitely noticeable, but because the game performed so poorly in the first place I can't get reasonable fps and image quality with it on. DLSS does not help at all because it's a really poor implementation of DLSS.
ID: h72duox
https://www.reddit.com//comments/m0upya/there_is_significant_headroom_in_rdna2_raytracing/?utm_medium=android_app&utm_source=share
Plus in no way does the 6800xt have the raw power potential to be 15-20% faster than a 3080.
So you know you're dealing with an AMD-sponsored game.
Nvidia sponsored games sometimes have settings that screw with AMD GPUs but at least you can turn them down or off.
AMD sponsored games just run worse on Nvidia GPUs period. What AC Valhalla (and other such games) does is not reasonable, it's suspicious. And it remains as such even after Nvidia got Rebar.
Godfall uses the generic DXR API to do super simple RT reflections? Let's lock Nvidia out artificially.
That's how you know you're dealing with a business. Neither of them are your friend. Profits are their friend.
> What AC Valhalla (and other such games) does is not reasonable, it's suspicious. And it remains as such even after Nvidia got Rebar.
Judging by power consumption, I suspect this game doesn't even make use of the "doubling" of CUDA cores per SM... but Turing isn't doing much better, so idk...
lol
The 6800XT and 6900XT are in their own tier of performance at 1080p.
At 1440p, performance is even with the 3090, and then the pace falls off considerably at 4K.
So, yes, the 6800XT has the raw power to be 10-20% faster than a 3080 while being 30 or 40% more efficient.
Just take into account that consoles use AMD hardware for ray tracing. So that "sponsorship" you talk about will probably just be the "default RT implementation".
Everything on PC uses a default RT implementation. It's either called DirectX Raytracing (DXR) or Vulkan Raytracing (except for a brief moment in time where Quake II RTX used a proprietary Vulkan Raytracing implementation because Vulkan didn't officially have one yet).
I guess you mean the level of quality of raytracing which for many games will be set by their console counterparts.
Which is what he meant by "sponsorship" element. Low quality RT effects and Radeon optimizations.
Luckily/hopefully PC users as usual will be allowed to define that level of quality in the future.
No, the default PC implementation will always be superior in some way, be it the number of RT objects or their quality.
The newest Watch Dogs shows the difference between PC and Console very clearly in that regard.
> The reason the 6800xt wins in those games is the "sponsorship" aspect
The fact that the comparison is at 1440p plays in AMDs favor as well.
Anything above 1080p actually plays in favor of Nvidia. Where RDNA2 cards have a performance advantage, the biggest gap is at 1080p, if I remember the tests right. But that's moot when an AMD title messes with Nvidia cards badly enough that we sometimes even see a 6800 ahead of a 3090.
From DF's video (going from memory here) it seemed like when RT was much lighter or much heavier than rasterisation, AMD came closer to nvidia in RT performance (as measured by the extra ms added by RT, like you're doing, so the light case isn't simply explained by there being less RT), while when raster and RT times were more similar, nvidia pulled ahead.
I've got two hypotheses as to why that might be the case. Though note that these are completely unsupported by any sort of actual profiling.
First up is nvidia's ray traversal being a dedicated fixed function unit. With light ray tracing, it's mostly idle while shaders are busy. With super heavy ray tracing, it's the bottleneck and shaders aren't fully utilised. With a more balanced mix, both are utilised well. AMD, using shaders for traversal, wouldn't see this utilisation imbalance. Though the lack of fixed function acceleration does still mean they're behind.
Second theory was infinity cache. A workload more balanced between RT and rasterisation will mean more contention over the limited cache size, while if it's (nearly) all raster or RT hits would be more likely.
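Here's a toy model of that first hypothesis; it's purely illustrative with invented numbers, and like the hypotheses above it isn't backed by any profiling. Assume nvidia's fixed-function traversal overlaps perfectly with shading (frame time = max of the two), while shader-based traversal competes with shading for the same units (frame time = sum):

```python
# Toy utilization model: invented numbers, idealized assumptions.

def overlap_frame_ms(raster_ms, rt_ms):
    """Idealized concurrent fixed-function traversal + shading."""
    return max(raster_ms, rt_ms)

def serial_frame_ms(raster_ms, rt_ms):
    """Traversal on the shaders, competing with raster/shading work."""
    return raster_ms + rt_ms

for raster_ms, rt_ms in [(8.0, 1.0), (8.0, 8.0), (1.0, 8.0)]:
    gap = serial_frame_ms(raster_ms, rt_ms) / overlap_frame_ms(raster_ms, rt_ms)
    print(f"raster {raster_ms:.0f} ms, RT {rt_ms:.0f} ms -> {gap:.2f}x gap")
```

The relative gap peaks at 2x when the raster and RT loads are balanced and shrinks toward 1x when either side dominates, which is the same shape as the pattern described above.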
'Much lighter' usually means reflections only... coherent rays. 'Much heavier' ends up being effects with more incoherent rays that tank AMD perf. -- Like you, just guessing.
I think one of the lighter examples was for GI (might even have been RE8). Though yeah generally I'd expect coherent rays to be much less taxing on SIMD traversal.
It does make sense from a top down perspective. The ability to do a lot of RT work concurrently is a big advantage for Nvidia, however it's definitely true that if you pass the limit of the amount of RT the card can do then it will choke on that workload and have the shaders waiting.
That said, for 90% of use cases this probably won't happen. On the flip side, the fact that AMD can't do it concurrently is hurting them in general performance as well as in stability, frame times, and bugs; it's not exactly an easy task to manage.
As in my other post, I have both a 3060 Ti and a 6800 XT, and the 6800 XT sits in its box. RT on it has definitely got better over time, but it's still a mess and not something enjoyable to use on it (and it also happens to perform around the same as the 3060 Ti in many current RT games).
if you just want a paper weight, I'll trade you my 6700xt for that 6800xt just sitting in a box ... =)
So uh, hi old friend, how much would it take to part from your 6800xt, I couldn't care less about RT and enjoy fluid 120fps plus gaming. Are you taking offers?
Digital Foundry lol
You mean the guys who butchered the FSR review?
How did they butcher it?
I've read enough reviews of the first-gen AMD cards with respect to ray tracing (and path tracing) performance to know that if that's what you're after, it's best to stick with Nvidia or wait for AMD's second generation (by then Nvidia will be on its third, though).
Quake 2 RTX performance on the AMD cards is particularly bad, worse than 2000 series mid-range Nvidia cards kind of bad. It's pretty sad, because competition is good. I hope AMD can invest some good R&D into ray tracing.
You never know. People doubted AMD would even catch Nvidia in raster performance, but it's safe to say they not only caught up but surpassed Nvidia in some cases; basically anything below 4K is AMD territory.
I hope they can get close to Nvidia's 3rd gen with their 2nd gen outing, but time will tell.
Keep in mind Nvidia is currently hamstrung by the inferior Samsung "8nm" manufacturing process. If Ampere were on TSMC 7nm like RDNA2, it would likely pull significantly ahead. Not that AMD didn't come up with some impressive innovations like Infinity Cache, but I would say Nvidia still has the architectural advantage.
You have to manage your expectations for RDNA2 ray tracing but I’ve actually been impressed compared to what I expected based on reviews. In quake 2 my (overclocked and raised power limit) 6900xt maintains at least 45fps at 1440p and works up to 60 in certain areas.
It’s enough for what I wanted to get out of ray tracing for the moment which is to turn it on every now and then and go “oh that’s cool”
I’m probably an outlier here but I’m not sure I agree re RT R&D. Is ray tracing really worth all this investment in R&D, hardware and software design and ultimately consumer cost?
Admittedly I mostly sim and haven’t tried most RT games but from what I gather the IQ gained from it doesn’t seem worth all the cost and bother. I’d rather have more raster performance than dedicated RT hardware.
When nvidia released the 2k series my take on RT was that it was just a differentiator for them, a way to lock people into their hardware. AMD diving into RT too just feels like they’re playing nvidia’s game.
Maybe someday RT will be table stakes, with every game using it, and I’ll eat my words. But right now it just seems like an expensive bell or whistle.
RT is the future. Not even a question about it.
Quake 2 RTX was done by Nvidia employees, so it has no optimization for AMD GPUs in it.
Have the courage to admit that NVIDIA's RT implementation is just substantially faster.
RDNA2 only accelerates, I think, two ray tracing instructions. If your implementation leans more toward those two ISA instructions, you can get more out of it.