Welcome back to Hardware Unboxed. Last month I looked at AMD's progress over the past few years with their Ryzen 5 series by benchmarking four generations in a range of games, using four different GPUs, two resolutions and two quality presets. It was a tremendous amount of work, but well worth it, as the results were very interesting; in fact, they ended up being far more interesting than I anticipated. In that content I tested with the GeForce RTX 3090, RTX 3070, Radeon RX 5700 XT and RX 5600 XT. Now that might seem like a bit of an unusual range of GPUs, but the point was to represent four different performance tiers, allowing us to look at GPU and CPU scaling performance. So the brand of GPU used didn't really matter, at least within reason. I say within reason because it did reveal a few interesting performance-related differences between the GeForce and Radeon GPUs that can be attributed to differences in architectural design, specifically in how the software driver interacts with the hardware. The most simplified explanation I can give for what we're seeing here is this: Nvidia's driver has a high CPU-core dependency for multi-threaded workloads, the kind of workloads we see in DirectX 12 and Vulkan titles, for example. Essentially, it requires more CPU cycles to function when compared to AMD's Radeon driver, and this results in higher CPU utilization, which becomes a problem for CPU-limited gaming. This issue is also present when using DirectX 11, though I'm not sure if the CPU load is the same there, worse or better, and we won't be looking at that today.

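To make that overhead idea concrete, here's a minimal sketch in Python, using made-up frame-time numbers rather than anything measured for this video. The point is simply that if the driver eats more CPU time per frame, the CPU-limited frame-rate ceiling drops, even though the GPU hasn't changed.

```python
# Minimal sketch: per-frame CPU cost sets the frame-rate ceiling when
# the CPU, not the GPU, is the bottleneck. All numbers are hypothetical.

def cpu_limited_fps(game_cpu_ms: float, driver_cpu_ms: float) -> float:
    """Maximum fps when every frame must wait on this much CPU work."""
    return 1000.0 / (game_cpu_ms + driver_cpu_ms)

# Same game workload, two hypothetical drivers with different CPU costs.
print(f"leaner driver:  {cpu_limited_fps(10.0, 1.5):.0f} fps ceiling")  # ~87 fps
print(f"heavier driver: {cpu_limited_fps(10.0, 3.0):.0f} fps ceiling")  # ~77 fps
```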
It's been a huge amount of work just to gather this DirectX 12 data in Watch Dogs: Legion and Horizon Zero Dawn; we're talking about almost 1,000 benchmark runs for this investigation. Now, backing up a little, here's an example of what I found in the previous Ryzen 5 scaling benchmark. If you pay attention to the 1600X and 2600X, you'll see that we hit a strong CPU bottleneck with the RTX 3090, limiting performance of the 1600X to 85 fps and just over 90 fps for the 2600X. We know this is a strong CPU bottleneck because the RTX 3070 should be around 25% slower than the RTX 3090 when GPU limited. So typically the takeaway here would be that the Ryzen 5 1600X is good for a maximum of 85 fps on average in Watch Dogs: Legion using the medium quality preset, regardless of GPU power. Even if a much faster GPU existed, we'd still see the same 85 fps limitation. This is the conclusion all reviewers and testers would come to, as it is technically correct, at least when using a GeForce GPU. What I found quite perplexing was the fact that the 1600X and 2600X were able to push past this limitation when paired with a Radeon GPU. In fact, both were faster using even the Radeon RX 5600 XT, 18% faster in the case of the 1600X. That means we're looking at a situation where these Ryzen 5 processors actually rendered more frames with the 5600 XT than they could with an RTX 3090. Now, I realize that doesn't seem possible, and believe me, I did triple-check this data before publishing it, but I assure you it's correct.

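As a side note, the logic behind calling something a CPU bottleneck can be boiled down to a simple check: if a GPU that should be clearly faster doesn't deliver meaningfully more frames, the GPU isn't the limiting factor. Here's a rough sketch of that test, with hypothetical thresholds and fps values rather than my actual results.

```python
# Rough sketch of the bottleneck test used throughout these benchmarks:
# if a much faster GPU fails to deliver its expected uplift, the result
# is CPU (or driver) limited. Thresholds here are illustrative only.

def looks_cpu_limited(fps_faster_gpu: float, fps_slower_gpu: float,
                      expected_gpu_gain: float = 0.30,
                      tolerance: float = 0.05) -> bool:
    measured_gain = fps_faster_gpu / fps_slower_gpu - 1.0
    return measured_gain < expected_gpu_gain - tolerance

# Hypothetical 1600X-style result: RTX 3090 barely ahead of an RTX 3070.
print(looks_cpu_limited(86, 85))   # True  -> CPU bound
print(looks_cpu_limited(120, 85))  # False -> GPU scaling as expected
```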
Before we get too far into this, I should quickly note that this obviously isn't always the case; the RTX 3090 is quite clearly a much faster GPU than the 5600 XT. By increasing the resolution to 1440p with the medium quality preset, we found comparable performance for the 2600X using the 5700 XT, RTX 3070 and RTX 3090. While there was a performance increase for the 1600X when moving from the 5600 XT to the 3090, it was only 8%, and we were still seeing a situation where a Radeon GPU enabled the highest level of performance with the 1600X: here the 5700 XT edged out the RTX 3090, as performance was still slightly CPU limited. Finally, by increasing the resolution again, this time to 4K, the strange scaling behavior was eliminated and the results were as you'd expect. Now, the strange scaling with the older Ryzen 5 parts wasn't just seen in Watch Dogs: Legion; rather, it was observed in all games when the test conditions were primarily CPU limited. That said, it was mostly seen with the 1600X and 2600X, with the 3600X exhibiting similar behavior at times. The 5600X, on the other hand, was fast enough to avoid running into a CPU bottleneck with the 5700 XT, so we never saw the issue with the newer, faster Zen 3 part, though it would still occur in a CPU-limited situation. At the time it wasn't clear if this was an issue with the Ryzen processors when using a GeForce GPU, or if it was something that Intel CPUs would also suffer from. Basically, more benchmarking was required, so without hesitation I got to it.

I felt that in order to really get a good look at what's going on here, it'd be best to include data with the Radeon RX 6900 XT, RX 6800 and even the RTX 2080 Ti, to see if this is an issue with perhaps the Ampere architecture or just a general Nvidia driver overhead problem. Lastly, I also included results with a lower-end Intel CPU that's likely to create a system bottleneck, as this would confirm whether the problem is exclusive to Ryzen or not. Again, given how much work it is to test all of these configurations, we're only going to look at two games, but given what we already know from the previous video, two should be more than enough. Having completed the testing, the next challenge was to present all the data in a clear and straightforward manner, and for this I'll be going about things a little differently to normal: I'll be breaking up the data. We'll start with a single example using Watch Dogs: Legion at 1080p with the medium quality settings, as this is one of the more CPU-bound tests that we have. Let's start with the 1600X and 2600X data using the 5600 XT, RTX 3090, 5700 XT and RTX 3070. What are some of the takeaways based on this data? Firstly, you'd be led to believe that the 5700 XT is around 20% faster than the RTX 3070, and I assure you that is not normally the case under GPU-limited test conditions.

The GeForce GPU is typically around 30 to 40% faster. More confusing is the fact that here the RTX 3090 is no faster than the RTX 3070. Typically, that in itself wouldn't be very confusing; this is what we call a CPU bottleneck. But it becomes very confusing when we find that performance is boosted by a further 18% when using the 5700 XT, a GPU that should be much slower than the RTX 3090. Then, if we compare the 5700 XT to the 6900 XT, we again find what looks like a strong CPU bottleneck, with very little in the way of a performance uplift when going from the 5700 XT to the 6900 XT. Basically, both the Radeon and GeForce GPUs in this example are CPU limited; the confusing part is that the limit for the Radeon GPUs is about 18% higher than that of the GeForce GPUs. Now, if we add in the Ryzen 5 5600X, we find that with this much faster CPU we're no longer CPU limited when using the RTX 3070 and 5700 XT, and as a result we do see a small performance uplift for the GeForce GPU. Whereas the RTX 3070 was 17% slower than the 5700 XT when using either the 1600X or 2600X, it's actually 5% faster when using the 5600X. The margin should no doubt be larger in favor of the RTX 3070, but we're still running into some driver overhead issues here, and this is more noticeable when upgrading to the RTX 3090.

We see that the 5600X only allowed for a 4% performance increase when upgrading from the 3070 to the 3090, and you'd probably expect to see a much greater margin. However, with the 6900 XT we're looking at a 24% performance increase over the RTX 3070, and this also suggests an overhead issue with the GeForce GPUs. Now, by adding in the Core i3-10100, we can see that this isn't a Ryzen-related issue; rather, it's a CPU headroom issue caused by greater overhead with the Nvidia drivers. Also, please note that for discussing these results I've swapped the 5700 XT and RTX 3090 positions around, as this makes it a bit easier to observe what's going on here, and it really is interesting to see the lower-end processors limited by the GeForce driver. It's also interesting to note that the 5600X appears to be out of sync with the 5700 XT, falling behind the GeForce configurations only to end up well in front with the 6900 XT. Before we continue, I should clarify that this isn't an Nvidia Ampere architecture issue: testing with the Turing-based RTX 2080 Ti revealed the same driver overhead issues when CPU limited, and this was seen in both Watch Dogs: Legion and Horizon Zero Dawn at 1080p using the medium quality preset. So again, this is very much down to the way the Nvidia driver works. Now, in total I have tested seven GPUs with five CPUs, and I'll make that data available to Floatplane and Patreon members in one big massive graph, but for the purpose of this video I'll limit the number of data points I show; otherwise this is quickly going to become a complex and difficult mess to follow.

So I've removed the RX 6800 and RTX 2080 Ti results, while also dropping the Ryzen 5 3600X. Starting with the 1080p medium quality data from Horizon Zero Dawn, we've got a lot to go over. Firstly, all four CPUs were able to max out the Radeon RX 5600 XT under these test conditions, at 106 fps on average. We do see some variation in the results with the 5700 XT, but overall performance is fairly similar. Then, when upgrading to the RTX 3070, we find the data goes a bit all over the place, and it's only the 5600X that moves forward: we're looking at an 11% performance increase for the 5600X, but an 11% decrease for the 2600X, a 9% decrease for the 10100 and a massive 17% decrease for the 1600X, the slowest of the CPUs tested. Increasing the GeForce GPU horsepower further with the RTX 3090 does help the Core i3-10100 improve upon the performance seen with the 5700 XT, though we're only talking about a mere 5% increase. Meanwhile, the 2600X only saw a 2% increase, while the 1600X still faced a reduction in performance, this time becoming 8% slower. Again, that is pretty crazy stuff given that the 1600X was 17% faster when using the 6900 XT compared to the 5700 XT. More incredible was the fact that the 1600X was 27% faster when paired with the 6900 XT as opposed to the RTX 3090. That's crazy given that the 6900 XT was just 3% faster than the RTX 3090 when paired with the 5600X.

The Core i3-10100 was also 21% faster when paired with the 6900 XT as opposed to the RTX 3090. Essentially, what this means is you could be looking at between 20 and 30% better performance with a Radeon GPU in CPU-bound scenarios. That said, it's important to remember that we are only talking about CPU-limited performance here. If we crank up the visual quality settings to the max, which drastically increases the GPU load, we find that the results are more in line with what you'd typically expect to see. There are still some oddities seen with the Ryzen 5 1600X when moving from the 5700 XT to the RTX 3070, but outside of that, scaling is as expected. Increasing the resolution to 1440p, but with the medium quality preset, sees the game become even more GPU limited when compared to what we saw using the 1080p ultra settings. Therefore, the results are even more typical, with the faster GPUs resulting in greater performance, pretty basic stuff. That said, we do still see some evidence of lower CPU overhead with the Radeon GPUs: the 6900 XT, for example, saw the 1600X boost its performance over the RTX 3090 by a whopping 27%, while the slightly faster 2600X only saw a 19% increase. Finally, the 1440p ultra data is exactly what you'd expect to see.

Here we have a hard GPU bottleneck with the 5600X, and while we were mostly GPU limited with the 5700 XT, we still see some separation with the RTX 3070, and then rather significant performance differences with the RTX 3090. However, we again find a situation where the Radeon RX 6900 XT allows for more consistent performance with the lower-end CPUs. Previously, you would have assumed that the 1600X was good for a maximum of 98 fps on average, as seen when testing with the RTX 3090; however, we're again looking at around a 20% boost with the Radeon GPU. Here's another look at the Watch Dogs: Legion 1080p medium data. Again, it's very clear here that when CPU limited, you face a greater performance penalty with the GeForce GPUs, as the 5600 XT should under no conditions beat an RTX 3070, let alone a 3090, at least when using the same visual quality settings. But that's exactly what we're looking at here: the Ryzen 5 1600X, 2600X and Core i3-10100 were all faster using the 5600 XT, up to 18% faster in the case of the 1600X. Again, you really wouldn't think that possible given the GeForce GPU was over 30% faster when paired with the 5600X. Using the ultra quality preset at 1080p doesn't entirely solve the performance issue with the 1600X either: it's still slightly slower when paired with the RTX 3070 and 3090 compared to what we see with the 5700 XT, and again we're still looking at a 20% performance boost when moving from the RTX 3090 to the 6900 XT. The 1600X also goes backwards when paired with the GeForce GPUs at 1440p using the medium quality settings.

Here we saw 89 fps with the 5700 XT, then just 80 fps with the RTX 3070 and 84 fps with the RTX 3090, while it was good for 100 fps when using the 6900 XT. Then finally, at 1440p with the ultra quality settings, we see that scaling is back to normal for the most part, though we are still seeing significantly better performance for the Ryzen 5 2600X, 1600X and Core i3-10100 when paired with the 6900 XT, relative to what we see with the 5600X. Okay, so I think that's enough looking at graphs. I do have quite a bit more data, and I will make that available to Hardware Unboxed community members, but honestly it doesn't tell us anything that we haven't already learnt from the data we've just looked at; it's really just more of the same. What we can look at now, though, is CPU utilization using the Core i3-10100, comparing the RTX 3070 with the RX 6800. I'll be using Shadow of the Tomb Raider for this testing, as it's very CPU demanding and also highly repetitive, which makes it great for benchmark accuracy. A few important things to note, though, before we get to the side-by-side comparison: frame rate does heavily influence CPU utilization, so without V-Sync enabled or some kind of fps cap, the faster RX 6800 will see increased CPU usage, sometimes higher than that of the RTX 3070.

However, if we limit the frame rate to, say, 60 fps, both GPUs should be providing the CPU with a similar workload, excluding stuff like driver overhead. In this example, we're looking at between 10 and 20% higher CPU utilization with the RTX 3070 when compared to the RX 6800, and that fits in pretty well with the performance numbers just shown. Please note that this 10 to 20% higher utilization will be much more difficult, and perhaps impossible, to spot with a much higher-end CPU, such as the Core i9-10900K for example. As we just saw, the increased utilization really only hurts performance in CPU-limited scenarios, so in this case with a lower-end Core i3 part. At this point it should be quite clear that the GeForce drivers do in fact require more CPU cycles to work, and this can see mid-range to low-end CPUs become the primary system bottleneck sooner than they would when paired with a Radeon GPU. Of course, this does depend on not just the hardware used, but also the quality settings. If, for example, you're pairing something like a Ryzen 5 2600X with an RTX 3070 and the goal is to game at 1440p using high quality visual settings, the increased CPU overhead shouldn't be too much of an issue, as the games will be primarily GPU limited more often than not. However, if you're a competitive gamer, or you're just after high frame rates but don't have the money for the latest and greatest CPU, again using something like a 2600X, then the Radeon GPU has the potential to enable 20 to 30% greater performance, and that really is a significant increase.

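If you want to reproduce this kind of comparison yourself, the key detail is capping both GPUs to the same frame rate before comparing CPU usage. This isn't the capture tooling I used, but here's a rough sketch of how you might log average system CPU utilization during a capped benchmark run, assuming the psutil Python package is available.

```python
# Rough sketch: log average system-wide CPU utilization while a frame-capped
# benchmark runs. Not the tooling used for this video; run it once per GPU
# with the same fps cap (e.g. 60 fps) so both drivers do comparable work.
import time
import psutil

def average_cpu_utilization(duration_s: float = 60.0,
                            interval_s: float = 1.0) -> float:
    samples = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        # Blocks for interval_s and returns system CPU usage in percent.
        samples.append(psutil.cpu_percent(interval=interval_s))
    return sum(samples) / len(samples)

if __name__ == "__main__":
    print(f"average CPU utilization: {average_cpu_utilization():.1f}%")
```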
It also means that if you have a CPU such as the 2600X and you're using, say, a Radeon RX 5600 XT, upgrading to the RTX 3070 might not net you any additional performance, especially in CPU-bound or very-near-CPU-bound scenarios. In fact, you could see a fairly substantial performance regression, and that means opting for an RX 6800 instead can be the difference between dropping frames and gaining a bit of extra performance. Now, I should back up a little here and point out that it's always best to find a healthy balance between CPU and GPU performance, and pairing a Ryzen 5 1600X with a GeForce RTX 3090, for example, isn't exactly a good idea. So for those of you with more powerful CPUs, most of this won't really apply, at least not right now. As you become more CPU limited down the track, this could become an issue, assuming you don't just upgrade your CPU at that point in time. Still, it is an interesting discovery to find that Radeon GPUs can allow you to go a little longer between CPU upgrades. It's also worth noting that even when limited to 60 fps on both GPUs in that little side-by-side comparison we looked at, I found that Shadow of the Tomb Raider played noticeably smoother with the RX 6800, as the RTX 3070 often suffered from very noticeable frame stuttering. In short, if you have a CPU like the Core i3-10100 or Ryzen 5 2600X, and there are plenty of models of roughly equivalent performance or slower, then upgrading to a Radeon GPU will, without question, ensure greater performance in the more CPU-demanding games.

Of course, we highly recommend that you research your next GPU upgrade and go with whichever option offers you the most bang for your buck, but if you don't plan on upgrading your CPU anytime soon, the driver overhead issue is worth considering. Anyway, I found this a very interesting benchmark investigation, and I'm glad that we finally have some high-end Radeon GPUs, as they really were crucial for looking into this properly. If you appreciate the time and effort that was invested into this content and you'd like to become a Hardware Unboxed community member, then make sure you check us out on either Floatplane or Patreon. Signing up to either of those platforms will give you access to stuff like a private Discord server, a monthly livestream, Q&As and behind-the-scenes videos; there's a lot of cool stuff there. So as I said, if you're interested, the Floatplane and Patreon links are in the video description. If not, that's perfectly fine; I'd like to thank you for watching this video.

https://www.youtube.com/watch?v=JLEIJhunaW8