RX6800XT vs RTX3080 – Real-time Ray Tracing performance impact compared in two AMD-sponsored titles

1 : Anonymous2021/07/30 04:03 ID: oudbtc

As the title implies, I'm aiming to measure the impact of enabling RTRT so we can get some insight into the performance cost on competing architectures. I've chosen The Riftbreaker and RE Village as they're newer, AMD-sponsored titles.

There was, and still is, some hoo-ha and reluctance around comparing the cards in Nvidia-sponsored titles, and I can at least in part see why. So I wanted to give AMD the chance to put its best foot forward here, given the games are optimized for AMD and the effects (number of effects, ray counts, etc.) are chosen and geared toward AMD hardware capability.

The two games/sites I got the figures from have testing methods that appear to give highly repeatable and accurate results, by way of a built-in benchmark or tightly scripted story runs.

The Riftbreaker 1440p Ultra - results taken from a Wccftech video

|  | No RT (avg / 1% low) | RT Ultra (avg / 1% low) | RT Ultra + RT AO (avg / 1% low) | RT Ultra as % of rast fps | RT Ultra + RT AO as % of rast fps |
| --- | --- | --- | --- | --- | --- |
| RX 6800 XT fps | 400 / 340 | 160 / 116 | 132 / 103 | 34-40% | 30-33% |
| RX 6800 XT frametime | 2.5 / 2.9 ms | 6.3 / 8.6 ms | 7.6 / 9.7 ms |  |  |
| RTX 3080 fps | 334 / 287 | 196 / 154 | 165 / 133 | 53-59% | 46-49% |
| RTX 3080 frametime | 3.0 / 3.5 ms | 5.1 / 6.5 ms | 6.1 / 7.5 ms |  |  |

The RTX 3080 renders the RT Ultra effect 80-90% faster than the RX 6800 XT, and the RT Ultra + RT AO effects 58-70% faster.

Resident Evil Village 1440p max settings - results from TechPowerUp's performance review

|  | No RT avg | RT on avg | RT on as % of rast fps |
| --- | --- | --- | --- |
| RX 6800 XT fps | 202.2 fps | 95.5 fps | 47% |
| RX 6800 XT frametime | 4.9 ms | 10.5 ms |  |
| RTX 3080 fps | 175 fps | 108.4 fps | 62% |
| RTX 3080 frametime | 5.7 ms | 9.2 ms |  |

The RTX 3080 renders the RT effect ~60% faster than the RX 6800 XT.

As you can see from the data, before we enable RT settings the 6800XT is able to comfortably surpass the RTX3080 by 15-20% at 1440p across both tested games. This isn't unexpected for AMD-sponsored titles, where the RDNA2 parts are generally able to outpace their usual Ampere competitor. And well done to AMD at this point; those are some serious optimizations that allow RDNA2 cards to effectively compete 1-2 tiers higher than their usual competitor.

It all turns around when we enable RT, however, and there are different ways to size that up.

In RE Village we see that the RTX3080 retains 62% of the rasterization-only average FPS, where the 6800XT can only retain 47%; but the difference between those two numbers, relatively speaking, doesn't tell the entire story.

Let's look at the same FPS figures as frame times: the RTX3080 incurs an average 3.5 ms render-time penalty to render the RT effects, but the 6800XT incurs a 5.6 ms render-time penalty, leading us to believe that the 3080 can render the RT effects ~60% faster in this scenario.
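The arithmetic behind these penalty figures can be sketched in a few lines (a quick illustration using the TechPowerUp numbers quoted above; the helper name is my own, and the post rounds the frametimes first, which is why it lands on ~60%):

```python
def rt_penalty_ms(raster_fps: float, rt_fps: float) -> float:
    """Extra milliseconds per frame that enabling RT costs."""
    return 1000.0 / rt_fps - 1000.0 / raster_fps

# RE Village 1440p averages quoted above
penalty_3080 = rt_penalty_ms(175.0, 108.4)    # about 3.5 ms
penalty_6800xt = rt_penalty_ms(202.2, 95.5)   # about 5.5 ms

# How much faster the 3080 renders the RT effects themselves
speedup = penalty_6800xt / penalty_3080 - 1.0  # roughly 0.57
print(f"3080: {penalty_3080:.1f} ms, 6800XT: {penalty_6800xt:.1f} ms, "
      f"3080 renders RT ~{speedup:.0%} faster")
```

The same formula produces the Riftbreaker penalty ranges when fed the average and 1% low figures from that table.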

The story gets even more interesting in Riftbreaker.

We see that the RTX3080 retains 53-59% of the rasterization-only average FPS, where the 6800XT can only retain 34-40%. Again looking at frame times, the RTX3080 incurs a 2.1-3.0 ms render-time penalty to render the RT effects, but the 6800XT incurs a 3.8-5.7 ms penalty, leading us to believe that the 3080 can render the RT effects ~80-90% faster than the 6800XT in this scenario.

But it doesn't stop there, when we add yet more RT load into the pipeline, the numbers shift and the gap narrows.

The RTX3080 incurs a 3.1-4.0 ms render-time penalty to render both RT effects on top of rasterization-only gameplay, but the 6800XT incurs a 4.9-6.8 ms penalty, leading us to believe that the 3080 can render both RT effects ~58-70% faster in this scenario.

What are some of the conclusions we can draw?

For at least an RTX3080 vs 6800XT scenario, in these games, the 3080 appears to handle the additional RT workload considerably faster, on the order of 58-90% depending on the game and settings. The obvious caveat is the small sample size of games; however, my conclusions appear to closely match those found by DF when they compared these cards head to head, seeking the same sort of answers and testing different games. While the absolute performance with RT on can look very competitive for the 6800XT relative to the 3080 (in these titles and perhaps others too), that figure does not tell the story as well as showing how many extra milliseconds RT adds to your frame time, and how much rasterization-only performance is lost when enabling RT. As the number of effects applied increases, the lead in RT computation that the 3080 has starts to diminish. To what end? Hard to tell; perhaps given enough RT load the 6800XT would eventually overtake the 3080, but both would likely be a slideshow at that point.

I'd like to hear any other conclusions/theories people might be willing to draw from this data or other similar comparisons, and whether my testing is flawed or omits valuable data or points of interest.

2 : Anonymous2021/07/30 08:52 ID: h729679

The reason the 6800xt wins in those games is the "sponsorship" aspect. RT is turned down and limited to maintain performance.

What it comes down to is: when the RT compute takes up most of the frametime, Nvidia has a large performance advantage in framerate. When rasterization takes up most of the frametime, AMD competes very well and can even beat Nvidia.

So if you design a game with just some light RT shadows and a handful of reflected objects at 25% resolution and only 0.5 bounces per pixel, etc., the RX6000 does exceptionally well. Design the game with more RT shadows and lots of object reflections at 100% resolution and 1 or 2 bounces per pixel, and the RTX3000 has a large advantage.
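As a back-of-envelope illustration of why those knobs matter so much (a deliberately crude toy model of my own, not how any engine actually budgets rays), the amount of ray work scales multiplicatively with effect resolution, effect count, and bounce depth:

```python
def rays_per_frame(width, height, effects, res_scale, rays_per_pixel, bounces):
    """Toy model: total ray segments traced per frame.

    Each effect traces `rays_per_pixel` rays per (scaled) pixel, and
    every bounce adds another segment to trace and shade.
    """
    pixels = width * height * res_scale ** 2
    return pixels * effects * rays_per_pixel * (1 + bounces)

# "Light" recipe: one effect at quarter-res reflections, no extra bounces
light = rays_per_frame(2560, 1440, effects=1, res_scale=0.5,
                       rays_per_pixel=1, bounces=0)

# "Heavy" recipe: full-res shadows + reflections, 2 bounces each
heavy = rays_per_frame(2560, 1440, effects=2, res_scale=1.0,
                       rays_per_pixel=1, bounces=2)

print(f"{heavy / light:.0f}x more ray work in the heavy recipe")  # 24x
```

Even this crude count shows an order-of-magnitude swing between the two recipes, which is roughly the gap between an "AMD-friendly" and an "Nvidia showcase" RT configuration.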

ID: h72d70n

RDNA2 only accelerates, I think, two ray-trace instructions. If your implementation leans more toward those two ISA instructions, you can get more out of it.

ID: h72wqa3

1 ray/triangle intersection per clock per RA is considered optimal for RDNA2.

Acceleration is limited to traversals/intersections and the GPU's ability to generate bounding boxes for ray searches to begin within (and limit the processing area). Denoising is required after all rays are processed.

Ray tracing processing is handled via FP32 shaders when an intersection is detected on either architecture. If you expand ray/triangle intersection rates AND FP32 processing, as Nvidia has done and AMD will do in RDNA3, you get better RT performance.

It's why MCM GPUs also make sense to push RT performance and increase ray cast density, giving higher quality RT effects.

ID: h74g2yi

The reason the 6800xt wins in those games is the "sponsorship" aspect. RT is turned down and limited to maintain performance.

Let me turn this around and ask: Couldn't one also say that Nvidia titles crank up the RT to further gimp AMD gpus?

Perhaps it was unintentional, but when The Witcher 3 was released with the first implementation of Nvidia's new hair effects, the tessellation was turned up so high at even the lowest settings that the framerate absolutely tanked on AMD GPUs while Nvidia ran fine.

After users figured out how to manually reduce the tessellation levels the performance difference more or less disappeared, and the best part was that the hair effects never needed all that tessellation in the first place, hair looked just as good at 16x tessellation as it did at 64x tessellation. (Maybe better, since the 16x hair ran at 60fps, and the 64x hair ran at 15fps.)

Like, maybe this is a dumb question, but how much ray tracing is necessary to visually improve the quality of an image? Unless I'm mistaken Spider-Man on the PS5 uses ray tracing, but because it only uses RT for reflections in windows the framerate is solid even on AMD hardware.

It seems like first generation ray traced games are going big, they're doing most or all of their illumination with ray tracing, but there's nothing saying that a game has to be 100% ray traced or 100% rasterized, it's totally possible to mix and match the two technologies.

Yeah, the AMD optimization and reduced use of RT may have made the difference in these specific benchmarks, but subjectively speaking do we know if any image quality was lost? If these games had been optimized for Nvidia instead of AMD, would they look better to us? Unpopular opinion here: When I look at Cyberpunk with max ray tracing side by side with maxed out screen space reflections and ambient occlusion, I can't really tell them apart, let alone tell which one is "better." If we, the consumer, don't lose anything by virtue of AMD optimization, and maybe even gain a few frames per second, isn't that a win?

I don't know, I just wonder if Nvidia isn't using ten pounds of ray tracing where ten ounces would do the job.

ID: h751a0s

I would not be surprised if Nvidia titles turn up the RT to disadvantage AMD, like they did with tessellation.

ID: h74ih6e

In Cyberpunk the ray tracing was definitely noticeable, but because the game performed so poorly in the first place, I can't get reasonable fps and image quality with it on. DLSS does not help at all because it's a really poor implementation of DLSS.

ID: h72duox

https://www.reddit.com/comments/m0upya/there_is_significant_headroom_in_rdna2_raytracing/

ID: h72ct42

Plus in no way does the 6800xt have the raw power potential to be 15-20% faster than a 3080.

So you know you're dealing with a AMD sponsored game.

Nvidia sponsored games sometimes have settings that screw with AMD GPUs but at least you can turn them down or off.

AMD sponsored games just run worse on Nvidia GPUs period. What AC Valhalla (and other such games) does is not reasonable, it's suspicious. And it remains as such even after Nvidia got Rebar.

Godfall uses the generic DXR API to do super simple RT reflections? Let's lock Nvidia out artificially.

That's how you know you're dealing with a business. Neither of them are your friend. Profits are their friend.

ID: h72j1ay

"What AC Valhalla (and other such games) does is not reasonable, it's suspicious. And it remains as such even after Nvidia got ReBAR."

Judging by power consumption, I suspect this game does not even recognize the "doubling" of CUDA cores per SM... but Turing is not doing much better, so idk...

ID: h72kqo9


The 6800XT and 6900XT are in their own tier of performance at 1080p.

At 1440p, the performance is even with the 3090, and then the pace falls off considerably at 4K.

So, yes, the 6800XT has the raw power to be 10-20% faster than a 3080 while being 30 or 40% more efficient.

ID: h72afy3

Just take into account that consoles use AMD hardware for ray tracing. So that "sponsorship" you talk about will probably just be the "default RT implementation".

ID: h72ddp6

Everything on PC uses a default RT implementation. It's either called DirectX Raytracing (DXR) or Vulkan Raytracing (except for a brief moment in time where Quake II RTX used a proprietary Vulkan Raytracing implementation because Vulkan didn't officially have one yet).

I guess you mean the level of quality of raytracing which for many games will be set by their console counterparts.

Which is what he meant by "sponsorship" element. Low quality RT effects and Radeon optimizations.

Luckily/hopefully PC users as usual will be allowed to define that level of quality in the future.

ID: h72k6fb

No, the default PC implementation will always be superior in some way, may it be the number of RT objects, or their quality.

The newest Watch Dogs shows the difference between PC and Console very clearly in that regard.

ID: h72e18o

The reason the 6800xt wins in those games is the "sponsorship" aspect

The fact that the comparison is at 1440p plays in AMD's favor as well.

ID: h72k4gv

Anything above 1080p plays in favor of Nvidia, actually. If RDNA2 cards have a performance advantage, the biggest gap is at 1080p, if I remember the tests right. But none of that holds if it's an AMD title that messes with Nvidia cards badly; we can sometimes even see the 6800 being ahead of the 3090.

3 : Anonymous2021/07/30 04:36 ID: h71pbkl

From DF's video (going from memory here), it seemed like when RT was much lighter or much heavier than rasterisation, AMD came closer to Nvidia in RT performance (as measured by the extra ms added by RT, like you're doing, so the light case isn't simply explained by less RT), while when raster and RT time were more similar, Nvidia pulled ahead.

I've got two hypotheses as to why that might be the case. Though note that these are completely unsupported by any sort of actual profiling.

First up is nvidia's ray traversal being a dedicated fixed function unit. With light ray tracing, it's mostly idle while shaders are busy. With super heavy ray tracing, it's the bottleneck and shaders aren't fully utilised. With a more balanced mix, both are utilised well. AMD, using shaders for traversal, wouldn't see this utilisation imbalance. Though the lack of fixed function acceleration does still mean they're behind.

Second theory was infinity cache. A workload more balanced between RT and rasterisation will mean more contention over the limited cache size, while if it's (nearly) all raster or RT hits would be more likely.

ID: h71qhbq

'Much lighter' usually means reflections only: coherent rays. 'Much heavier' ends up being effects with more incoherent rays that tank AMD perf. Like you, just guessing.

ID: h71u9yk

I think one of the lighter examples was for GI (might even have been RE8). Though yeah generally I'd expect coherent rays to be much less taxing on SIMD traversal.

ID: h7227d9

It does make sense from a top down perspective. The ability to do a lot of RT work concurrently is a big advantage for Nvidia, however it's definitely true that if you pass the limit of the amount of RT the card can do then it will choke on that workload and have the shaders waiting.

That said, for 90% of use cases this probably won't happen. On the flip side, the fact that AMD can't do it concurrently is hurting them in general performance as well as stability, frametimes and bugs; it's not exactly an easy task to manage.

As in my other post, I have both a 3060Ti and a 6800XT, and the 6800XT sits in its box. RT on it has definitely gotten better over time, but it's still a mess and not something enjoyable to use on it (and it also happens to perform around the same as the 3060Ti in many current RT games).

ID: h72luhb

if you just want a paper weight, I'll trade you my 6700xt for that 6800xt just sitting in a box ... =)

ID: h728iyo

So uh, hi old friend, how much would it take to part from your 6800xt, I couldn't care less about RT and enjoy fluid 120fps plus gaming. Are you taking offers?

ID: h7283nj

Digital Foundry lol

You mean the guys who butchered the FSR review?

ID: h72e07j

How did they butcher it?

4 : Anonymous2021/07/30 08:56 ID: h729gb2

I've read enough reviews of the first-gen AMD cards with respect to ray tracing (and path tracing) performance to know that if that's what you're after, it's best to stick with Nvidia or wait for AMD's second generation (by then Nvidia will be on its third, though).

Quake 2 RTX performance on the AMD cards is particularly bad, worse than 2000 series mid-range Nvidia cards kind of bad. It's pretty sad, because competition is good. I hope AMD can invest some good R&D into ray tracing.

ID: h72pqw5

You never know. People doubted AMD would even catch Nvidia in raster performance, but it's safe to say that they not only caught up but surpassed Nvidia in some cases; basically anything below 4K is AMD territory.

I hope they can get close to Nvidia's 3rd gen with their 2nd gen outing, but time will tell.

ID: h72ri5e

Keep in mind Nvidia is currently hamstrung by the inferior Samsung "8nm" manufacturing process. If Ampere were on TSMC 7nm like RDNA2, it would likely pull significantly ahead. Not that AMD didn't come up with some impressive innovations like Infinity Cache, but I would say Nvidia still has the architectural advantage.

ID: h72xjqp

You have to manage your expectations for RDNA2 ray tracing but I’ve actually been impressed compared to what I expected based on reviews. In quake 2 my (overclocked and raised power limit) 6900xt maintains at least 45fps at 1440p and works up to 60 in certain areas.

It’s enough for what I wanted to get out of ray tracing for the moment which is to turn it on every now and then and go “oh that’s cool”

ID: h73hegz

I’m probably an outlier here but I’m not sure I agree re RT R&D. Is ray tracing really worth all this investment in R&D, hardware and software design and ultimately consumer cost?

Admittedly I mostly sim and haven’t tried most RT games but from what I gather the IQ gained from it doesn’t seem worth all the cost and bother. I’d rather have more raster performance than dedicated RT hardware.

When nvidia released the 2k series my take on RT was that it was just a differentiator for them, a way to lock people into their hardware. AMD diving into RT too just feels like they’re playing nvidia’s game.

Maybe someday RT will be table stakes, with every game using it, and I’ll eat my words. But right now it just seems like an expensive bell or whistle.

ID: h73o0yf

RT is the future. Not even a question about it.

ID: h729sus

Quake 2 RTX was done by Nvidia employees, so it has no optimization for AMD GPUs in it.

ID: h72jote
Quake 2 RTX was developed by an independent company, Lightspeed Studios™. Quake 2 RTX uses standard Vulkan RT extensions and has no optimizations for NVIDIA hardware. Quake 2 RTX is open source: AMD could have optimized it if they really wanted/needed to. With no optimization from AMD, it sounds more likely that NVIDIA's RTRT implementation is just faster and AMD has nothing to offer to fix the performance issues.

Take the courage to admit that NVIDIA's RT implementation is just substantially faster.

ID: h72hli8

Quake 2 RTX has been updated with the latest Vulkan RT APIs which also follow AMD guidelines for better RT on RDNA2 GPUs.

There isn't really a Nvidia bias anymore.

ID: h729z7k

That's not a problem because it's open source. You can always create an AMD-optimized fork yourself.

5 : Anonymous2021/07/30 21:57 ID: h74vhin

Been playing Cyberpunk with RT on and off with my RTX 3080. To be honest I do not really see the point or difference from a Gamer's perspective, some reflections become...shinier, great.

RT seems more like a 3D animator's filter setting that they would like and "appreciate" more than a gamer like me.

This felt like an audiophile telling you the differences between a $100 pair of headphones and a $1000 pair. As someone that has both perspectives available to him, I do not see the point from a casual music listener's perspective. Plus, admit it, most people watch YouTube and stream media more than the local high-quality stuff 😀

I have a 6900XT as well in the other pc. Not sure I see the point in testing the differences, should I?

Kudos to OP for just doing this benchmark for OUR benefit. thanks, cool stuff.

6 : Anonymous2021/07/30 15:03 ID: h7392zr

If only we had the new Ratchet and Clank on PC to compare 🙂

7 : Anonymous2021/07/30 11:47 ID: h72m71j

Download Quake II RTX, max everything, test, done. That's the purest RTRT test on a functioning game you can do, there is no complex geometry to process, no big levels to load, it's all about samples and bounces, you won't have to guess, every card will have its hands full on pure RTRT.

ID: h73qy3q

Quake II RTX was created by Nvidia engineers; they didn't spend any time optimizing the code for AMD, they just changed the code from their proprietary implementation to Vulkan extensions.

Quake II RTX is NVIDIA's attempt at implementing a fully functional version of Id Software's 1997 hit game Quake II with RTX path-traced global illumination.

Quake II RTX builds upon the Q2VKPT branch of the Quake II open source engine. Q2VKPT was created by former NVIDIA intern Christoph Schied, a Ph.D. student at the Karlsruhe Institute of Technology in Germany.

Both the original branch and the RTX one were done by NV employees.

Just because its open source doesn't mean its optimized properly.

The TL;DR first: There is significant headroom in RDNA2 raytracing with efficient coding. I was able to increase the performance of my 6800XT in Quake II RTX by 19% with some small code changes (PR47). These can be summed up as switching to wave32 and reducing VGPRs.

ID: h74bt1k

I know it was developed by Nvidia (unlike Q2VKPT, which was made by Christoph Schied, who is not an Nvidia employee; he just did an internship there, and he's a researcher as far as I understand). But unlike AAA games it doesn't occupy the card with heavy rasterization, so all you get to see is the raw RTRT performance. Hybrid games can have multiple bottlenecks that have nothing to do with RT computation, which is the same reason I didn't cite Minecraft: loading the chunks can cause bottlenecks. Quake II is small and practically irrelevant in that aspect, which is why I believe it's the best candidate for seeing RT performance in a functioning game.

Plus, it is open source, so you can't blame Nvidia if no one from AMD spent time optimizing it for RDNA2, assuming there are optimizations to be made. The code is there on GitHub for everyone to tinker with, and it's been almost a year since RDNA2 came out. I've seen people propose changes to fix performance and bugs on AMD hardware, and APanteleev seemed quite happy to implement them, so if someone had proposed other optimizations I'm sure they would've been integrated.

Also, 3D packages show what kind of difference RT cores can make. If the author of those changes can bring the same "simple" optimizations to those, thousands of 3D artists who use AMD will build him a monument in a city of their choosing, because in my experience offline RT is so much faster with RT Core acceleration it's almost unbelievable (for reference, a 2060 rendering with RT-accelerated OptiX can be faster than a Titan RTX rendering with good old CUDA), and it'd be cool if people using AMD cards could experience the same. Blender would be a great candidate; I think they're ditching OpenCL and going with Vulkan.

8 : Anonymous2021/07/30 04:34 ID: h71p3bx

That is an incredible analysis.

Thank you so much for breaking down the numbers from a real-world perspective using an actual game, then extrapolating on the perceived numerical effect alongside the other features a user is likely to want to enable at the same time while actually gaming.

ID: h71pbjz

My pleasure, I love a good deep dive into the tech, the numbers, and what it all means. But by no means do I consider my testing and conclusions the be-all, I hope there are interesting discussions and perhaps more testing / other ways to slice it that come out of the comment section.

ID: h71q5zi

It is the practical implementation that stands out most to me. I enable and disable features in games based on the visual effect coupled with thermal levels and fan noise. Rarely is only one of the features enabled. For the most part I disabled features due to fan noise.

9 : Anonymous2021/07/30 07:03 ID: h721kic

From what I've seen, there appears to be relatively little 'optimisation' for AMD around RT. While there is some, and things have improved, it still appears that developers are taking the route of simply simplifying or reducing the RT.

RDNA seems to be very good at raster but quite bad at RT. So in a case like RE Village, where the RT renders at 1080p no matter the input resolution and is also quite light in regards to lighting, it ends up favouring the AMD cards quite well.

Being someone that enjoys maxing out my games and likes to turn RT on where possible, I have both a 6800XT and a 3060Ti; the 6800XT is in its box not being used. That should tell the story.

Another area is stability/maturity. In my experience RT is still much buggier on AMD in comparison. It's all well and good doing performance comparisons, but they're of little use if in reality there are still many scenarios where things routinely break or become unstable.

10 : Anonymous2021/07/30 04:30 ID: h71oocp

Sorry if I missed it, but were these tests done with no DLSS or FSR? I am on my phone and I may have missed it. Sorry again if I did.

ID: h71os7i

No DLSS/FSR of any kind, just native 1440P with and without Ray Traced effects and measuring the differences to draw some conclusions.

11 : Anonymous2021/07/30 12:46 ID: h72s0l7

Don't a few games actually use separate RT settings on AMD compared to Nvidia? I recall reading Watch Dogs Legion used toned down settings on the RDNA2 cards with no way to adjust them in the menu. That would certainly make benchmark comparisons more challenging.

ID: h73j4s4

I would have thought some kind of dynamic RT quality system would be the norm: establish a frame-time target and run as much RT as you can in that time, so that IQ is what varies based on hardware rather than framerate. But my knowledge of the pipeline is limited, so maybe this is misguided.
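For what it's worth, such a controller could be as simple as a feedback loop on frame time (a hypothetical sketch; the quality knob, thresholds, and step size are all made up for illustration):

```python
def adjust_quality(quality: float, frame_ms: float,
                   target_ms: float = 16.7,   # e.g. a 60fps budget
                   step: float = 0.05,
                   lo: float = 0.25, hi: float = 1.0) -> float:
    """Nudge an abstract RT quality knob (say, ray count or RT
    resolution scale) toward a frame-time budget, clamped to [lo, hi]."""
    if frame_ms > target_ms * 1.05:      # over budget: shed RT work
        quality -= step
    elif frame_ms < target_ms * 0.90:    # comfortably under: add RT work
        quality += step
    return min(hi, max(lo, quality))

# Simulated run: frame times fall as quality is dialed back
q = 1.0
for ms in [22.0, 20.5, 19.0, 17.8, 16.9, 16.0]:
    q = adjust_quality(q, ms)
print(round(q, 2))  # quality settles around 0.8 for this trace
```

The dead zone between the two thresholds keeps the knob from oscillating every frame; real engines do something similar for dynamic resolution scaling, so the idea isn't far-fetched.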

12 : Anonymous2021/07/30 21:25 ID: h74rci9

For me it's a moot point. Every time I tried RT on AMD hardware, it either crashed the game to desktop or crashed within a few minutes of playing it.

Crashed the same on the 6800XT and 6900XT. Frequently to desktop. This was the last straw so I eventually sold out and went green after it kept crashing in my favorite games.

13 : Anonymous2021/07/30 12:07 ID: h72nz8u

In non-RT titles, the 6800XT was always a step ahead of the 3080. And RT-wise, the 3080 will always be ahead of the 6800XT, even the 6900XT; that's a given, no benchmark required. RT cores do what they do best.

Fortunately, I had both cards, and stayed with AMD for the raster performance.

Next generation we will see what Nvidia brings to the table in terms of pure RT performance without DLSS tricks.

ID: h72rwk3

TPU actually has the 3080 3% ahead of the 6800XT on average, but it depends on what games you play since there are a lot of outliers that swing one way or the other. Definitely the most competitive the two companies have been since the GCN 1.0 days.

14 : Anonymous2021/07/30 15:19 ID: h73b9qn

Good read, thank you

16 : Anonymous2021/07/30 19:52 ID: h74eb2v

Am I reading this wrong, or are those numbers actually pretty good?

17 : Anonymous2021/07/30 23:18 ID: h755dv7

As a layman, it seems Nvidia has fixed-function RT that does a good job of balancing heavy loads (given the expected performance hit, anyway). AMD uses a more general-purpose approach that relies on lower precision and fewer simultaneous effects. The games where AMD leverages large rasterization leads are ultimately using simpler effects to incur less performance loss.

