After the mini-test we did on PlayerUnknown's Battlegrounds in June, it was time to check, after countless updates, where the game's performance stands, not to mention all the hardware released along the way.
At that time, we compared the performance of graphics cards with different presets and found that the best combination for 60 fps / 1080p performance was the Ryzen 5 1400 and the GTX 1060. Since then, game updates have claimed to fix the game's poor optimization, including a patch about four months ago that improved CPU usage by allowing the game to use six or more cores.
Given that, we thought it would be interesting to focus on CPU performance. Today's tests include all 8th-generation Intel Core CPUs, all Ryzen CPUs, and some of the 7th-generation Core series, giving us results for 16 different processors at 1080p using the very low, medium and ultra quality presets.
The chips were paired with a GeForce GTX 1080 Ti using the 388.43 driver, and for those interested, we also recorded the CPU usage of all 16 processors. All unlocked Intel CPUs and all Ryzen CPUs were tested with DDR4-3200 CL14 memory, while the locked Intel CPUs were tested with DDR4-2400 CL14 memory. For example, the Core i3-8350K was tested with 3200 memory, while the Core i3-8100 used 2400 memory.
For testing, we walked around the town of Pochinki for 30 seconds, which is more than enough time to collect the data we needed. The pass time was reduced from the usual 60 seconds to just 30 to minimize how often we were killed before completing a pass; Pochinki is a high-loot area, so high risk but high reward for those looking for good equipment.
The test starts and ends at exactly the same point each time, and the results are based on an average of three runs. Let's take a look at them…
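For reference, a pass like this can be reduced to the usual average and 1% low figures from a frame-time log. Here's a minimal sketch in Python using made-up frame times, not our measured data:

```python
def run_stats(frame_times_ms):
    """Return (average fps, 1% low fps) for one benchmark pass."""
    fps = [1000.0 / ft for ft in frame_times_ms]
    avg_fps = sum(fps) / len(fps)
    # 1% low: the average of the slowest 1% of frames (at least one frame)
    worst = sorted(fps)[: max(1, len(fps) // 100)]
    return avg_fps, sum(worst) / len(worst)

# Three hypothetical passes; a real 30-second log would hold thousands
# of frame times per run.
runs = [[10.0, 10.0, 10.0, 20.0],
        [10.0, 12.5, 10.0, 12.5],
        [10.0, 10.0, 12.5, 12.5]]
per_run_avgs = [run_stats(r)[0] for r in runs]
final_avg = sum(per_run_avgs) / len(per_run_avgs)  # the reported figure
```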
First up are the "very low" quality results, with the visual settings dialed down to their minimum. This should take the GTX 1080 Ti out of the equation as the performance-limiting component, yet we clearly see a GPU bottleneck with most of the 7th- and 8th-generation Core processors.
The GTX 1080 Ti is only good for an average of 120 fps here, with drops to around 100 fps.
Previously, the 7700K and R5 1600 delivered the same performance when tested with the ultra quality preset. Here the 7700K is 20% faster than the 1600X, as the Ryzen CPUs appear to be struggling by comparison. Of course, the Ryzen CPUs still delivered playable performance at well over 60 fps, but the results are disappointing for a game that claims to support high-core-count CPUs.
Surprisingly, the Ryzen 7 1800X was only 14% faster than the Ryzen 3 1200 and just 6% faster than the 1300X in average frame rate. This suggests the game isn't taking advantage of higher core counts at all and instead favors clock frequency over core count.
It seems a quad-core is sufficient and HT or SMT support isn't strictly required. A dual-core with HT, however, is not enough for optimal performance, as we see with the Pentium G4560, which was considerably slower than every other CPU tested. It was still playable, though, and would pair well with a budget $100 graphics card.
Stepping up to the medium quality preset reduces the performance of the GTX 1080 Ti with the fastest CPUs tested by about 10%.
The Ryzen CPUs shuffle around a bit, and now those with more cores fare better, at least compared to what we saw before. The 1800X is 25% faster than the R3 1200 and 16% faster than the 1300X. The medium quality settings therefore seem to put a higher load on the CPU, although this wasn't evident when monitoring CPU load, as total utilization was largely the same.
Most of the Intel CPUs still hit the limits of the GTX 1080 Ti here, so the 8- and 12-thread models would likely pull further ahead with an even faster graphics card.
Finally, we have the results for the ultra quality preset, and here we see only very small changes compared to the medium quality results. For the most part only a few frames are lost, though the Intel quad-cores are the biggest losers here. The 1% low results for the Core i3-8100 and 8350K dropped by around 15%, while the Ryzen 5 1500X and Ryzen 3 1300X, for example, were only 8% slower.
So far, the results seem to be all over the place, something we often see in poorly optimized games. This graph gives us a better overview of what's going on. Looking at the Core i7-8700K, we see a 10% drop from very low to medium and then only a 3% drop from medium to ultra.
The Core i5-7600K, on the other hand, drops by 10% from very low to medium and then a further 12% from medium to ultra. That's interesting: the ultra quality preset certainly hurts the quad-core more, but then we have the Ryzen 3 1200 results, which more closely match those of the 8700K, which is confusing to say the least. The 1600X, meanwhile, shows fairly consistent scaling across the three quality presets.
The Pentium G4560 is different again, delivering similar results with the very low and medium presets but dropping fairly sharply with the ultra preset, leaving the Core i5-7600K and Ryzen 5 1600X as the only CPUs to exhibit consistent scaling.
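The preset-to-preset drops quoted above are simple relative differences. A quick sketch of the arithmetic, using placeholder fps values rather than our measured results:

```python
def pct_drop(before_fps, after_fps):
    """Percentage decrease in average fps when moving to a heavier preset."""
    return (before_fps - after_fps) / before_fps * 100

# Hypothetical averages for one CPU across the three presets
very_low, medium, ultra = 120.0, 108.0, 104.8
drop_vl_to_med = pct_drop(very_low, medium)     # 10.0 (%)
drop_med_to_ultra = pct_drop(medium, ultra)     # roughly 3%
```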
Now for some even more interesting results. What you see here is the average CPU usage recorded over the 30-second pass, so it is not the peak value but the average. The G4560, for example, briefly hit 100% but also fell to around 80%, though its average load for the entire test was 91%.
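The distinction between peak and average utilization can be illustrated with a short sketch; the per-second samples here are invented, not our measured data:

```python
# Hypothetical per-second CPU-usage samples (%) over part of a pass
samples = [100, 95, 80, 92, 88, 91]

peak_load = max(samples)                # the momentary spike
avg_load = sum(samples) / len(samples)  # the figure shown in the chart
```

A CPU can briefly spike to 100% while still averaging far lower over the full pass, which is why the charts report the average.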
It's interesting how underutilized the Ryzen 5 and 7 series CPUs are here. The Ryzen 5 1500X, for example, has four cores and eight threads clocking up to 3.7 GHz depending on the load, yet the Core i7-8700K, with six cores, 12 threads and a minimum operating frequency of 3.7 GHz, recorded a significantly higher load.
AMD's own 6-core, 12-thread Ryzen 5 1600X averaged just 28% utilization, significantly less than the 45% seen with the 8700K. You would expect a lower-clocked CPU with the same number of cores to show a higher load, not a drastically lower one, so there is clearly an optimization problem for the Ryzen CPUs.
It's been six months since we last tested PlayerUnknown's Battlegrounds, and it's safe to say the game needs further tweaking. Aside from the shoddy Ryzen support, even the Core i7-8700K and GTX 1080 Ti combination was underwhelming: an average of 123 fps at 1080p using the minimum quality settings is pathetic.
To put this result in context, the same combination pushes over 220 fps at medium settings in Battlefield 1, 200 fps in Warhammer II, 240 fps in F1 2017, 260 fps in Rainbow Six Siege, 220 fps in Call of Duty WWII, 250 fps in DiRT 4, and the list goes on. Again, all of these games ran at medium rather than the minimum quality preset.
Anticipating the argument that PUBG is an open-world shooter and therefore pounds the CPU: we saw that this was simply not the case with the very low quality settings. Quad-core Intel chips and above hit a GPU bottleneck at only 120 fps, while the Ryzen CPUs were heavily underutilized.
One thing seems clear: if you are a big PUBG fan or are building a PC just to play this title, the Core i3-8100 or 8350K offers the best value for your money. We would normally never recommend the 8350K, but PUBG makes it a valid choice.
We still have to test older CPUs, but there is a good chance the Core i5-2500K will handle the game without a problem, provided a slight overclock is applied. It was surprising to find that with a high-end GPU there is little difference in frame rate between the very low and ultra quality settings. Visually, however, the difference is massive.