AMD vs. Intel: GeForce RTX 3080 Benchmark

For today's article, we're once again going to run through plenty of benchmark charts filled with RTX 3080 data, but this one will be a little different. While we are benchmarking the new GeForce RTX 3080, the GPU won't be the main focus of our attention; instead, we'll dig deeper into CPU performance.

We've spent a lot of time and energy investigating which flagship CPU to pair with the new GeForce 30 series for testing. The two main candidates are Intel's Core i9-10900K and AMD's Ryzen 9 3950X.

We originally considered the 3950X for two main reasons: first, it supports PCIe 4.0, and second, Ryzen seems like the more relevant choice for our readers in late 2020. Judging by the response to our reviews, the vast majority of our readers either already own a Ryzen processor or would rather buy a Ryzen processor than an Intel one. Our CPU purchasing recommendations also follow this pattern pretty consistently.

Yet we know without a doubt that Intel remains king when it comes to ultimate gaming performance. This is why over the past few years, every time we've updated our Best CPU guide, we've recommended an Intel processor for those looking for maximum frame rates.

We didn't expect this to change with a high-end Ampere GPU that supports PCIe 4.0, and we said so at the time. We also ran a poll asking you about this CPU choice, and an overwhelming majority favored the 3950X: 83% of the 62,000 voters. However, because we're after the most scientific method rather than the most practical or popular one, we ran extensive testing across most of the games we planned to use for benchmarking the RTX 3080.

What we found was that at 4K with an RTX 2080 Ti, the 3950X and 10900K performed the same overall, while the AMD processor was just 2% slower at 1440p and 5% slower at 1080p. Since the RTX 3080 is a $700 GPU and the RTX 3090 will cost almost $1,500, we didn't plan to test at 1080p. With such a small margin at 1440p and no difference at 4K, the choice of processor was largely inconsequential from a performance standpoint.

We knew the RTX 3080 would be faster than the 2080 Ti, but unless it was over 100% faster, the margin at 4K wasn't going to change. We were confident the 3950X wouldn't skew the results enough to put things like our cost per frame and average performance data at odds with other media, and as we now know, it didn't.

Had we tested the RTX 3080 at 1080p, the R9 3950X might have been a problem depending on the games used, as we've found in the past that processors like the 3900X can be up to 10-15% slower than competing Intel parts. With that in mind, we're amazed at how many people have asked for 1080p benchmarks with the RTX 3080 on the grounds that it's the most popular resolution in the Steam hardware survey.

The reason 1080p is so popular is that it's an entry-level resolution: the cheapest monitors are 1080p. It's the same reason the GTX 1060, 1050 Ti, and 1050 are by far the most popular graphics cards, not because they're the best, but because they're cheap. Nvidia's incredible branding strength plays a role as well, but that's another conversation.

For the same reason that we don't test CPU performance with a GTX 1060, we won't be testing an extremely high-end GPU at 1080p, a resolution that is currently almost exclusively reserved for budget gaming. As mentioned in previous articles on this topic, it shouldn't matter much which CPU is used to test the RTX 3080, as at 1440p and 4K the GPU will almost certainly be the limiting factor.

That being said, there are a few titles that give Ryzen some problems even at 1440p, and we're going to look at those today. Also included in this article are 1080p results, as they weren't part of our first RTX 3080 review. Both the AMD and Intel CPUs were tested with four 8 GB DDR4-3200 CL14 memory modules running in dual-channel, dual-rank mode. We'll discuss memory choices towards the end of the article. First, let's look at the blue bar graphs.

Benchmarks

Starting with the Death Stranding results, we see that the 3950X, when paired with the RTX 3080, is 7% slower than the 10900K at 1440p, although it's important to note that it was also 7% slower with the RTX 2080 Ti, so the margin between the GPUs doesn't change.

We see something similar at 1080p: the 3950X was 12% slower than the 10900K with the 3080 and 11% slower with the 2080 Ti.

When we switch to 4K, we see practically no difference between the two CPUs. The 3950X is 3% slower with the RTX 3080, but we're only talking about 3 fps, which doesn't affect the bottom line or our conclusion in any meaningful way. For the most scientific approach, however, it makes more sense to use the 10900K here.

A really bad case for Ryzen is Microsoft Flight Simulator 2020. Oddly enough, this brand-new next-gen simulator uses DirectX 11 and is, for the most part, heavily reliant on single-thread performance. The game is expected to receive DX12 support in the near future, and if so, that could change the CPU performance picture. For now, however, the CPU is the primary bottleneck at 1440p and below.

Here the 3950X with the RTX 3080 was 20% slower than the 10900K at 1440p, and 10% slower with the 2080 Ti. Margins of this size can be a real problem for our data, and the only reason we got away with it is that we tested 14 games. Had we used a smaller sample of 6-8 games, a margin this large in a single title would have been problematic.

On the positive side, the CPU bottleneck is completely eliminated at 4K, where both processors deliver identical performance with either the 2080 Ti or the 3080.

Moving on to Shadow of the Tomb Raider, we find a game that runs just as well, or even slightly better, on the Ryzen 9 under all conditions. Whether it's 1080p, 1440p, or 4K, the results are practically identical across the board.

The Rainbow Six Siege results are interesting. At 1440p with the RTX 3080, the 3950X was just 2% slower than the 10900K, and we see the exact same thing with the RTX 2080 Ti.

At 1080p the 3950X was 8% slower, and once we start pushing beyond 400 fps, the limitations of the Ryzen processor become apparent. While we hadn't tested at 1080p before, it's interesting to see a similar 7% margin with the 2080 Ti. As in the previous titles, there is of course no difference between the two processors with the RTX 3080 when testing at 4K.

The F1 2020 results are similar to those of Rainbow Six Siege. The 3950X is only a few percent slower than the 10900K at 1440p, which isn't enough to affect the overall results.

We see a more significant difference at 1080p, where the 3950X was 11% slower. At 4K, both CPUs allow the RTX 3080 to deliver its maximum performance.

We ran into a problem with Gears 5 when testing for our RTX 3080 review, but that has since been fixed. While 4K performance is very similar between the 3950X and 10900K, as expected, the Ryzen processor struggles in Gears 5 at 1440p and 1080p.

Here we see a 17% performance deficit on the RTX 3080 at 1440p, which is not ideal. That margin increases to 24% at 1080p.

The Horizon Zero Dawn results are also interesting, as we see a severe loss of performance with the 3950X at 1080p. Here the AMD processor was 16% slower with the RTX 3080.

By the time we reach 1440p, that margin drops to just 5%, and at 4K we see no difference between the AMD and Intel processors. It's worth noting that the 5% performance loss at 1440p with the RTX 3080 was also seen with the 2080 Ti, so the gap between these two GPUs doesn't change when using an AMD processor rather than an Intel one.

We see comparable performance across the board in Assassin's Creed Odyssey, with the 10900K just 1-2 fps faster, which at most amounted to a 3% difference.

The World War Z results favor Intel. Here the 3950X with the RTX 3080 was 6% slower at 1440p, and it was 5% slower with the 2080 Ti, so again we see a similar performance loss on both GPUs. At 4K, the gap narrows to 4%.

We have to admit we were really surprised at how weak the 3950X was in Metro Exodus, even at 1440p. For more recent benchmarks, we've updated where in the game we test, choosing a section that is less CPU-limited than the usual train sections, and we believe this makes for an overall better GPU benchmark.

Where the RTX 2080 Ti used to top out at around 130 fps with the 10900K, it can now render almost 170 fps. That's an issue for the 3950X, which caps performance at around 150 fps.

We're now looking at a scenario where the 3950X limits the performance of the RTX 3080 by up to 21% at 1440p. Again, this is far from ideal, although we see no such performance limitation at 4K.

Resident Evil 3 shows no performance difference between the 3950X and 10900K at 1440p and 4K with the RTX 3080. The same goes for 1080p with the 2080 Ti, although we do see a 4% difference with the RTX 3080.

Doom Eternal is another game where we don't see any difference in performance with the RTX 3080 at 1440p and 4K using either the 3950X or the 10900K. However, the 3950X was 8% slower at 1080p.

Wolfenstein: Youngblood shows no performance difference between the 10900K and 3950X at 1440p and 4K, at least none worth talking about. The 10900K was 8 fps faster than the 3950X with the RTX 3080 at 1440p, but that amounts to a small 2% advantage.

Hitman 2 is another title that shows some differences between the processors and doesn't run particularly well on the 3950X. Here the AMD CPU with the RTX 3080 was 10% slower at 1440p, although we see essentially no difference at 4K.

Performance summary

After seeing the results for 14 games, we have to say that things were a little messy at times with the Ryzen 9 3950X. Of course, the GeForce RTX 3080 is an extremely fast GPU, and minor changes in performance at that level won't affect gameplay.

Some games were an even match, in others Intel had a clear performance advantage, and for the most part we saw no real difference at 4K. Here is the 14-game average.

The data you see here is what we use to evaluate GPU performance and calculate metrics like cost per frame, so these numbers are very important.
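
To make the cost-per-frame metric concrete, here's a minimal sketch of the calculation. The $700 price is the RTX 3080 MSRP quoted earlier in this article; the 2080 Ti price and both frame rates are placeholder assumptions for illustration, not our measured data.

```python
# Cost per frame = GPU price / average frame rate.
# The $700 figure is the RTX 3080 MSRP mentioned in this article;
# the 2080 Ti price and all fps values are illustrative placeholders.
gpus = {
    "RTX 3080":    {"price_usd": 700,  "avg_fps": 144.0},
    "RTX 2080 Ti": {"price_usd": 1200, "avg_fps": 100.0},
}

for name, specs in gpus.items():
    cost_per_frame = specs["price_usd"] / specs["avg_fps"]
    print(f"{name}: ${cost_per_frame:.2f} per frame")
```

A lower dollar-per-frame figure means better value, which is why even a small CPU-induced fps skew feeds directly into this metric.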

We see that the 3950X was 5% slower than the 10900K at 1440p, which has the potential to influence results. However, it was also 4% slower with the 2080 Ti, so if you compare these two products, you'll come to the same conclusion regardless of the CPU used. The situation is similar at 1080p, where the 3950X was 8% slower with the RTX 3080, but also 7% slower with the 2080 Ti.
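
To show why the conclusion holds regardless of CPU, here's a hedged sketch of how a multi-game average margin can be computed. The per-game frame rates below are invented placeholders, and we use a geometric mean, a common choice for averaging relative performance, though this article doesn't state which average was applied.

```python
from math import prod

# Hypothetical per-game results: (fps on 3950X, fps on 10900K)
# at 1440p with the RTX 3080. Placeholder values, not measured data.
results = [(160, 172), (141, 141), (118, 125), (203, 210)]

# Relative performance of the 3950X versus the 10900K in each game.
ratios = [amd / intel for amd, intel in results]

# A geometric mean keeps one outlier title from dominating the average.
geo_mean = prod(ratios) ** (1 / len(ratios))
print(f"3950X runs at {geo_mean:.1%} of 10900K performance, "
      f"i.e. {1 - geo_mean:.1%} slower on average")
```

Run the same calculation with 2080 Ti numbers and, as noted above, the CPU-induced margin comes out similar, so the GPU-versus-GPU comparison lands in the same place.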

What we learned

Of the 14 games tested, half showed practically no performance difference at 1440p and 4K between the Ryzen 9 3950X and the Core i9-10900K. These include Doom Eternal, F1 2020, Resident Evil 3, Rainbow Six Siege, Shadow of the Tomb Raider, Wolfenstein: Youngblood, and Assassin's Creed Odyssey.

In Death Stranding, Horizon Zero Dawn, and World War Z, we saw single-digit margins at 1440p.

Bad titles for AMD include Gears 5, Hitman 2, Metro Exodus, and Microsoft Flight Simulator 2020, although none of these titles showed any difference at 4K.

If we put all that data together, the 3950X was 5% slower on average at 1440p, but that doesn't skew GPU-versus-GPU comparisons by anywhere near that much. Because slower GPUs like the 2080 Ti also suffer a small performance loss with the 3950X, we estimate the actual skew at closer to 1% to 2%. For example, if the RTX 3080 loses 5% and the 2080 Ti loses 4%, the 3080's margin over the 2080 Ti shrinks by only about 1%.

There are a few things we could have done to reduce this margin, such as using faster memory. DDR4-3800 CL16 increases performance by around 4%, and manually tuning the timings can add a further 14%, at least that's what we've seen at 1080p with the RTX 2080 Ti. In retrospect, perhaps we should have done that, but we decided against it because dialing in memory timings on a test system can be complicated and can also cause stability problems.

We could have stuck with our Core i9-10900K test system, but after seeing the reaction from readers, it seemed worth giving the 3950X a try. Most importantly, when we tested the RTX 3080 with both CPUs, we came to the same conclusion.

For example, TechPowerUp found that the RTX 3080 was 52% faster than the 2080 at 1440p based on a 23-game sample, using a 9900K clocked at 5 GHz with DDR4-4000 memory. That's very much in line with our own data: we found the RTX 3080 to be 49% faster than the 2080 on average. Looking at all the usual suspects, it seems pretty unanimous that the RTX 3080 is about 50% faster overall than the RTX 2080 at 1440p and around 70% faster on average at 4K.

Do we think it was a mistake to test with the Ryzen 9 3950X? In some ways, the margins in Gears 5, Hitman 2, Metro Exodus, and Flight Simulator 2020 aren't ideal, and while that didn't really change the overall picture, we'd rather not be CPU-limited at 1440p in those titles.

When it came down to it, we were torn on which way to go, and honestly, now that we've tested the RTX 3080, we still are. Both options have clear advantages and disadvantages. We'll take solace in the fact that we tested on the platform most of you wanted to see. Looking ahead, we expect to switch to a Zen 3-based test system in a few months, so whichever path we had taken, we were inevitably going to have to redo this testing relatively soon.

Purchasing links:
  • Nvidia GeForce RTX 3080 on Amazon
  • Intel Core i9-10900K on Amazon
  • AMD Ryzen 9 3950X on Amazon
  • AMD Ryzen 9 3900X on Amazon
