Like millions of people around the world, you probably use your PC to play video games. And since PC gamers tend to be knowledgeable and enthusiastic about hardware, you may also have plenty of experience building and configuring computer systems. So how about combining it all? Take that computer know-how, love of games, and interest in components, and mix them together: it's the perfect recipe for diving into how a PC works – and how well it works.
In this article, we'll explain how to use games to benchmark your PC and how to analyze the results – either to check overall performance or to determine which part of your computer is doing most of the work. We'll cover the best way to collect the data, using the same tools we use to test the latest hardware. Time to get testing!
It is more than just a game
We've previously looked at 20 programs for analyzing and benchmarking your hardware, and some of them – the ones that specifically test 3D graphics performance – offer detailed performance graphs and monitoring tools to log how the various parts are working. But as good as these programs are, they are ultimately an artificial setup: we don't play benchmarks. We don't buy $2,000 machines just so they can run test programs all day (well, most of us don't).
However, if you have a number of games installed on your system, you can easily use them instead. The latest titles often push the CPU and GPU to their absolute limits, giving your system just as good a workout as any benchmark program.
Quite a few titles also have their own built-in test mode. The latest entries in Ubisoft's popular franchises can run a set test and then display the results in great detail. In Assassin's Creed Odyssey, for example, the tool is advanced enough that results can be compared to a previous run, so you can easily see the impact of changes to hardware or game settings.
In the image above, we see three graphs: the frame rate, how long the CPU took to process each frame for rendering, and how long the GPU took to render each frame. There are also some basic statistics, showing the average (the arithmetic mean) and the absolute maximum/minimum values.
The simplicity of these statistics is the first problem with using such in-game functions, whether in games with benchmark modes or those that can simply display frame rates while playing. An absolute maximum or minimum may have occurred for only a single frame, or that value may have been hit repeatedly at regular intervals. If the results can be displayed in a graph, this is easy to spot; if all you get is the numbers, they are almost useless on their own.
Another potential problem with using a built-in test is that the load it places on the CPU, GPU, etc. may not reflect what you'd see during actual gameplay. There's no background processing of player input, pathfinding, audio, and so on, and the rendering may lack the explosions and other particle effects that can often bring a graphics card to its knees.
However, if you just want a quick check of your system, these kinds of tests and the numbers they collect are good enough. On the other hand, if you want to see what the PC is doing under a real workload, it's better to log the data in-game yourself – and for that you need the right tool for the job.
Choose your weapon!
There is a reasonable number of freely available programs that can record how quickly a 3D game is being processed. For Windows systems, MSI's Afterburner and FRAPS are two of the most popular. The former is a comprehensive tool for adjusting graphics card clock rates and voltages, and works with both AMD and Nvidia models. This information, along with temperatures and frame rates, can also be displayed in an overlay while playing, and all of this data can be logged for later analysis.
FRAPS can also display and log the frame rate, capture video, and take screenshots. However, it hasn't been updated since 2014, so it's best suited to older titles that use Direct3D 10 or earlier – and since Afterburner covers those as well as the latest games, FRAPS is probably only worth using if nothing else works.
We prefer to use a tool called OCAT (Open Capture and Analysis Tool): it was created by AMD as part of the GPUOpen project, an initiative that allows developers to access a number of free software tools and code samples. OCAT uses Intel's PresentMon, which is also open source.
Both tools are constantly evolving, so you occasionally run into one or two problems. However, they are very good for displaying and logging frame rates in Direct3D 11, 12 and Vulkan-based games. Nvidia also offers FrameView, which does the same thing but is not open source. Finally, there is CapFrameX, which also logs a large number of technical data. However, the best feature is the comprehensive toolkit for analyzing the data and displaying the statistics.
Like Afterburner and FRAPS, OCAT offers an overlay function that shows the frame rate and frame times.
While running, the overlay shows which graphics API is being used and can plot changes in frame rate on a small graph. This can be a little difficult to make out, though, so it's not really worth using.
Other tools like Afterburner offer much more details in their overlay systems (clock speeds, temperatures, etc.). So if you just want to monitor what's happening to your PC while playing games, OCAT doesn't give you very much. However, we use it because it collects a lot of important information and then processes and analyzes it for us.
To do this, set a hotkey for the capture process (make sure you select one that the game is not using) and, if necessary, choose where to save the data. It's a good idea to limit the acquisition time to a fixed value, but we'll look at that in a moment.
The results are saved as a CSV (comma-separated values) file, so you don't have to use OCAT to examine the data. Microsoft Excel, Google Sheets, Apache OpenOffice, etc. can process this format without any problems. Just a word of warning: the file contains a lot of data and can easily bog your PC down if you try to graph everything at once.
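Since the capture is just a CSV, you can also pull the columns out with a few lines of code. Here's a minimal Python sketch using only the standard library; the file name `capture.csv` is a placeholder, and the column name comes from the table shown later in this article.

```python
import csv

def load_frame_times(path, column="MsBetweenPresents"):
    """Read one frame-time column (in milliseconds) from an OCAT capture."""
    times = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            times.append(float(row[column]))
    return times
```

From here you can feed the list into a spreadsheet-free analysis or plot it with whatever charting tool you prefer.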
Using it is very simple: first start OCAT, then launch the game you want to test, and press the capture hotkey once you're in the game. If you haven't set a time limit, press the hotkey again to stop recording.
Note that starting and stopping a capture often causes a momentary drop in the game's performance. There's not much you can do about this, but it only affects a handful of milliseconds' worth of results.
To use or not to use – that is the question
So that's logging frame rates covered – but what else is worth recording? A question you often see in PC hardware forums is "Is my CPU bottlenecking my graphics card?" (or vice versa), and the responses regularly talk about monitoring CPU and GPU utilization.
You can log such data with HWMonitor or Afterburner, but it's worth knowing exactly what that utilization percentage actually refers to. Roughly speaking, it is a measure of how long threads (sequences of instructions from a block of code) are being processed by the CPU/GPU, as a percentage of the sampling period (typically checked once per second or so).
Threads may be actively being processed ('busy'), waiting for data from cache/memory ('blocked'), or finished ('inactive'). Busy and blocked threads are counted the same way, so high utilization does not necessarily mean the processor is working hard, and the percentage does not indicate how much of the processor's capability is being used.
We can see this by running two different GPU benchmarks and logging everything with HWMonitor. The first test is with Geekbench 5, which goes through a series of computing routines, all of which run on the graphics card.
Looking at the Max column in HWMonitor, the GPU load reached a maximum of 96% – but pay close attention to the power value. This peaked at 57.29%, which corresponds to about 140 W for this particular graphics card. Also note that the GPU's temperature rose only 9 degrees above the recorded minimum.
Compare that to a program that really puts a strain on the graphics card. We used OCCT, but you can also use a graphics benchmark or a game that's set to maximum levels of detail.
This test showed 100% utilization – just 4% more than in Geekbench 5 – but the power and temperature values are much higher, the former by over 100 W compared to the previous test. Together, they show that the GPU was being worked much harder than in the previous benchmark, which is not at all obvious if you look only at the utilization value.
Logging power consumption can therefore be a useful way to analyze component workload. The following image shows the CPU, system memory, and GPU power values while two sections of 3DMark's Time Spy Extreme benchmark are running – the left section is the CPU test, the right is graphics test 2.
We can clearly see that the GPU does relatively little work during the CPU test, while the central processor is hit hard (as expected – but note that the RAM is busy too). In the test designed to push the GPU to its limits, the graphics card's power draw is at full tilt; yes, the CPU's is still quite high, but the GPU's power percentage tells us it is doing a great deal of work.
But is it doing all the work? How can we be sure the CPU has no influence on what should be a pure graphics test? The simple answer is: we can't – not from utilization and/or power figures alone.
While logging this information isn't wrong, there are better ways to determine how hard the different components are being used. And as we consider what data is worth collecting, let's take a look at what OCAT is doing under the hood.
What actually gets logged to measure the frame rate?
To understand what data OCAT collects, you need to know a little about the rendering process in a 3D game – if you're interested, you can read a short introduction to the topic. Nvidia gives us an overview with this diagram from the FrameView documentation:
For each frame the graphics card creates, the game engine calculates all the data and instructions the GPU requires. One specific Direct3D instruction is called Present(): it signals that the frame should be displayed once rendering is complete.
Any game or program that displays the frame rate measures the time interval between successive Present() instructions. Various things take place in this gap:
- The graphics API (Direct3D, OpenGL, Vulkan, etc.) converts the game engine's general instructions into more detailed rendering commands
- The graphics card driver then converts these into the specific code the GPU runs
- Next, the GPU works through all of the code and marks the finished frame for display
- The output part of the GPU then sends the image to the monitor when the frame is drawn the next time the screen is updated
- Meanwhile, the game engine has already started or even finished preparing the next frame
The time between Present() instructions is therefore not a measure of how quickly the GPU renders a frame – or at least, not directly. However, since the GPU's rendering work almost always takes far longer than anything else in that gap, it's a fairly accurate estimate.
PresentMon, OCAT and FrameView measure many different time intervals several times per second and save them in the CSV file. We'll look at this when we analyze some results later, but these are the main recorded times:
| CSV column heading | What the time interval measures |
|---|---|
| MsInPresentAPI | The number of milliseconds spent inside the Present() call |
| MsUntilRenderComplete | The gap in milliseconds from the Present() call being issued to the GPU finishing rendering the frame |
| MsUntilDisplayed | The number of milliseconds from the Present() call being issued to the frame being displayed |
| MsBetweenPresents | How many milliseconds passed between the previous Present() call and the current one |
| MsBetweenDisplayChange | The time in milliseconds between the previously displayed frame and the currently displayed one |
When we display frame rates in our hardware reviews, we use the MsBetweenPresents data. OCAT uses this by default too, and it's the same number that other logging tools and games use when displaying frame rates.
Note, however, that these are all times – so how do they turn into a frame rate (fps = frames per second)? The calculation is simple: since there are 1,000 milliseconds in a second, just divide 1,000 by the frame time.
For example, if the value of MsBetweenPresents is a constant 50 milliseconds, the displayed frame rate is 1000 / 50 = 20 fps. So if you're aiming for 60 fps or 144 fps, the frame time needs to be about 17 or 7 milliseconds, respectively.
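That conversion is trivial to script; here's the one-liner as a Python sketch:

```python
def ms_to_fps(frame_time_ms):
    """Convert a frame time in milliseconds to frames per second."""
    return 1000.0 / frame_time_ms
```

So `ms_to_fps(50)` gives 20.0 fps, and hitting 60 fps requires a frame time of 1000 / 60 ≈ 16.7 ms.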
Take the scientific approach to benchmarking
While benchmarking your PC is not exactly the same as performing particle physics research at CERN, we can still apply some basic aspects of the scientific method for data acquisition and analysis. The first part is to minimize the number of variables that can change and affect the test result.
In an ideal world, you want the computer to do nothing but run the game and logging software. We do this when we test the latest hardware because our test machines are only used for benchmarking. This may not be possible for most home computers, but there are a few things you can do to help:
- Leave open only the minimum number of programs needed for testing and monitoring
- Quit non-essential background programs like chat software or cloud services like Discord and OneDrive
- Pause anti-virus programs and/or block all network traffic
- Configure software updates or backup systems to perform their actions outside of the benchmarking period
Next, you need to make sure that the test environment matches your normal gaming conditions. This may seem to contradict what we just said about reducing variables, but we're referring to the fact that modern hardware actively changes its clock speeds and operating voltages depending on the temperature it's running at.
Graphics cards are particularly susceptible to this because the GPU can get very hot under load. As soon as it reaches its predetermined thermal limit, the hardware starts lowering its clocks to ensure the chip doesn't overheat. We can see this clearly in the image above: as the chip's temperature increased, the clock rate decreased to keep heat levels under control.
Of course, this means that the performance also decreases, but by preheating all key components in the PC (CPU, RAM, GPU), the clocks should be somewhat more consistent. The easiest way to do this is to run the game under test for at least 5 minutes before you start logging data.
Also note that the test results always vary despite all of the precautions listed above. It could be due to the game you're using, or a normally idle background process that pops up to say hello. For this reason, it is important to collect multiple records. Carry out several test runs, at least three, so that an average can be calculated.
This is something we always do in our hardware tests. More than 3 runs is better, but doing something like 10 is unlikely to be of much use: provided the test environment is controlled, the run-to-run deviations are usually quite small, and once you have several thousand data points, the odd rogue result won't have much impact on the statistics.
The last thing to consider is how much data to collect per test run. The logging period must be long enough to be representative of what normally goes on, but not so long that it wastes valuable testing time. To demonstrate this, we took 3 sets of frame rate captures in Ubisoft's Assassin's Creed Syndicate.
We chose a place in the game where we can easily repeat the test and set OCAT to collect data for 10, 100 and 1,000 seconds. We'll show you how we got the following numbers and what they all mean, but here are the results:
| Length of data collection | 10 seconds | 100 seconds | 1,000 seconds |
|---|---|---|---|
| Average frame rate (fps) | 59.8 | 60.0 | 60.0 |
| 1% low (fps) | 33.8 | 54.0 | 54.0 |
| 99% high (fps) | 120.4 | 66.9 | 66.5 |
| Frame time standard deviation (ms) | 2.97 | 0.62 | 0.58 |
We can see there is practically no difference in average frame rates, but the 10-second run shows a much wider range of rates (from 33.8 to 120.4 fps). The same variation was probably present in the other runs, but because they contain so much more data, its statistical impact is greatly reduced. That matches real play anyway – who only plays for 10 seconds?
Note, however, how the 100 and 1,000-second numbers are almost copies of each other. For this particular test, collecting data for over 16 minutes produced statistics no different from a run ten times shorter.
Brought to you by the letter S and the number 1
We've mentioned the word statistics a few times now, so it's time to get some data and crunch the numbers.
Let’s say we’ve already selected a game to test and collected all the data we need. We can now use OCAT to analyze the results for us. To do this, go to the "Visualize" tab:
Simply click on "Select capture file to visualize", choose the desired CSV file, and then click the "Visualize" button. We did a quick test in Assassin's Creed Odyssey, recording the various frame times using the game's built-in benchmark. The test was run at 4K resolution with the graphics settings at Ultra High.
By default, OCAT shows the MsBetweenPresents numbers over the duration of the logging in the form of a smoothed diagram.
It may not look smoothed, but 2,629 data points were collected, and the graph would be far messier if they were all displayed. The shorter the frame times, the better the performance, so we can see the benchmark starts at around 18 milliseconds (equivalent to 56 fps) before settling at a reasonably consistent 26 milliseconds (38 fps) for the rest of the test.
You can select more than one capture file to analyze: load the first CSV file, then click the "Select capture file" button again. This makes it easy to compare frame times across different scenarios or games. For example, the image below shows measurements from Milestone's latest MotoGP 20 (green line) and Shadow of the Tomb Raider (orange line).
Both games ran at 4K with every graphics option at its highest level, including DLSS and ray-traced shadows for Tomb Raider. From the frame times we can see that this game runs more slowly, but also notice how much the times jump around. Compare that to MotoGP 20, where the frame times sit very consistently at 14 milliseconds.
OCAT can not only record the results, but also do some basic statistical analysis for us. When you click the "Collect Statistics" button, various options are displayed. The two we're looking for are "Average FPS" and "99th percentile". OCAT calculates the average frames per second (FPS) by calculating the arithmetic mean of the MsBetweenPresents times. To do this, add all the times collected and divide the total by the number of data points collected.
The conversion to the frame rate corresponds to the one described above: Divide 1,000 by the times. In this case, the average of MsBetweenPresents was 24.88 milliseconds, which gives the following average frame rate:
The average fps by itself paints a very incomplete picture of the data. That's because the average (here, the arithmetic mean) is just one statistic – specifically, a measure of central tendency: a value around which the sample of numbers tends to cluster.
There are other averages, such as the geometric mean, the median, the mode, and so on, but OCAT doesn't calculate them. So if you want to examine other measures of central tendency, you'll have to analyze the data with other software.
The AMD program does calculate the 99th percentile of frame times. Percentiles are values that describe the distribution of numbers within a sample, and in the case of the 99th percentile, 99% of all frame times are lower than this time – only 1% were higher.
In our Assassin's Creed test, the 99th percentile was 31.65 milliseconds. Remember that the longer the frame time, the slower the frame rate, so converting this to an fps value gives us the 1st percentile of the frame rates: 1000 / 31.65 = 31.60 fps (the similarity of the numbers is a coincidence, honestly!).
In our hardware test reports, we call this the "1% low", and it tells you that 99% of the recorded frame rates were higher than this number.
The average fps and 1% low are two quick stats that give you a good view of what's going on behind the scenes of your computer. While the absolute minimum frame rate can be much lower than the 1% low, it doesn't occur very often – less than 1% of the time! If the frame rate dipped that low more frequently, the 1% low figure itself would be lower.
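If you'd rather compute these two headline numbers yourself from a logged list of MsBetweenPresents values, here's a minimal Python sketch. Note that the percentile here uses a simple nearest-rank method, which may differ slightly from OCAT's own calculation.

```python
def summarize(frame_times):
    """Return the average fps and the 1% low from frame times in ms."""
    mean_ms = sum(frame_times) / len(frame_times)
    ordered = sorted(frame_times)
    # 99th percentile frame time via nearest rank
    idx = max(0, int(round(0.99 * len(ordered))) - 1)
    p99_ms = ordered[idx]
    return {"avg_fps": 1000.0 / mean_ms, "low_1pct_fps": 1000.0 / p99_ms}
```

Feeding in a capture where the mean frame time is 24.88 ms would, as in the article's example, report an average of roughly 40 fps.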
But what if we want to work out more statistics, or just do our own analysis? Well, CapFrameX can generate a variety of statistics for you, or you can write your own program in Python or R. You also have the option of using a spreadsheet program (such as Excel or Google Sheets), where you'd need the following functions:
| Function | What it calculates |
|---|---|
| =MIN(array) | Finds the absolute minimum in the data array (the lowest number) |
| =MAX(array) | Finds the absolute maximum in the data array (the highest number) |
| =AVERAGE(array) | Calculates the arithmetic mean of the selected values (a measure of central tendency) |
| =GEOMEAN(array) | Calculates the geometric mean of the selected values (a measure of central tendency) |
| =MEDIAN(array) | Calculates the median of the selected values – the value that lies exactly in the middle when the numbers are sorted lowest to highest (a measure of central tendency) |
| =PERCENTILE.EXC(array, k) | Calculates the kth percentile of the selected array (a measure of distribution) |
| =STDEV.S(array) | Calculates the standard deviation of the array, treated as a sample of the population (a measure of dispersion) |
The geometric mean and median simply provide a different view of the average frame time – the former is best used when the times span a large range, and the latter is useful when the times tend to fall into distinct groups. For most people, the good old arithmetic mean does the job.
We've already talked about percentiles, but we use the exclusive version rather than the inclusive one so the very first and last data points are ignored. The capture process can often throw these out of line, as the game pauses for a split second while the system starts logging and then saves the recorded data.
Another useful statistic is the standard deviation. This value gives you a good idea of how consistent the frame times were, since it is a measure of the average gap between the individual times and the overall mean. The larger this value, the greater the variation in the frame rate. So for smooth gaming, you want it to be as small as possible.
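Python's standard statistics module covers every function in the table above, so a sketch like the following reproduces the whole set without a spreadsheet. Helpfully, `statistics.quantiles()` defaults to the exclusive method, matching PERCENTILE.EXC, and `stdev()` is the sample standard deviation, matching STDEV.S.

```python
import statistics as st

def frame_time_stats(times):
    """Compute the spreadsheet statistics for a list of frame times (ms)."""
    return {
        "min": min(times),
        "max": max(times),
        "mean": st.mean(times),
        "geomean": st.geometric_mean(times),
        "median": st.median(times),
        "p99": st.quantiles(times, n=100)[98],  # 99th percentile, exclusive
        "stdev": st.stdev(times),
    }
```

A small standard deviation here means consistent frame pacing; a large one means the frame rate is jumping around, even if the mean looks healthy.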
However, you don't need all of this mathematical machinery to examine what your PC is chewing through during a game – the average frame rate and the 1% low are perfectly good statistics to work with. It's all about how you use the results!
Know your enemy!
Time to put all this information and knowledge into practice. Benchmarking your PC can tell you which part of the computer affects a game's frame rates the most. Our test guinea pig's main components were an Intel Core i7-9700K, 16 GB of DDR4-3000, and a GeForce RTX 2080 Super – quite powerful, though faster parts certainly exist.
To demonstrate a detailed analysis process, we used Assassin's Creed Odyssey again to see how its built-in benchmark tool behaves on the above system. We want to judge what kind of test it is: does it push the CPU hard, or is it all about the GPU? We'll also compare these results to numbers collected while actually playing the game, to get an idea of how representative the benchmark tool is of real performance.
A total of 5 runs were recorded with the game at 1080p (1920 x 1080 pixels) and the graphics quality at Ultra High. Using a spreadsheet package instead of OCAT, the frame times were averaged (with a few other statistics calculated), then converted to frame rates and finally displayed in a scatter plot.
Now, it looks like the frame rate is jumping all over the place and the test must have been a stuttering mess. However, these rapid changes are milliseconds apart – far too fast to perceive. The overall impression of the test was that it felt pretty smooth.
The average frame rate is pretty good at just under 75 fps, but the gaps between the average and the 1% low and 99% high are quite large, at 22 fps and 40 fps respectively. This strongly suggests that the workload sometimes spikes and that one component in the PC is struggling with it – but is it the CPU or the GPU?
One way to examine this is to repeat the tests with different resolutions and graphics settings. Changing the former only really affects the GPU, while changing the latter affects the CPU and GPU (although this depends a lot on the game and the effects are carried out at different quality levels). We selected 5 resolutions and used the lowest / highest possible level of detail.
We only show the average frame rates here, since the 1% low values followed very similar patterns. At first glance, this data doesn't seem to tell us anything we don't already know: shading more pixels or using extra rendering effects leads to a lower frame rate.
However, changing the resolution produces a linear change in frame rate (as shown by the straight trend lines). To see why this matters, let's compare it with the results we got from one of the graphics tests in 3DMark's Fire Strike Extreme benchmark:
This test shows strongly curved trend lines, which tells us that changing the resolution has a massive impact on performance. In situations like this, the game/test is pixel bound – i.e., the graphics card can handle the shader calculations easily, but as the pixel count increases, the frame rate becomes limited by the GPU's pixel output rate and memory bandwidth.
Think of it like a production line making a simple component at a fixed rate. If the order calls for 100 items, the line will get through them quickly; if the order runs to a few million, it will take much longer to finish everything – even though each individual item doesn't take long.
The straight lines in the Assassin's Creed runs indicate that this test is either compute bound or bandwidth limited. In other words, either there are so many long, complex calculations for the CPU or GPU that the extra pixels make little difference to the workload, or there is so much data to move around that the system's memory bandwidth can't cope.
Back to the factory analogy: in a compute-bound scenario, output is governed not by the size of the order but by the complexity of the part being made. A bandwidth-limited scenario would be a factory constantly having to wait for raw materials to be delivered before it can get going.
We can find out which situation we're in by changing one variable in the PC: the GPU's core clock rate. Using MSI's Afterburner, we locked the graphics card to a fixed speed and ran multiple tests across a wide range of clock values.
We picked 1080p for this investigation, simply because it was the middle resolution of the five we checked earlier. Looking first at the Ultra High trend line, we can see that doubling the GPU clock almost doubles the average frame rate (the 1% low behaved much the same).
That's not as big a jump as in the 3DMark resolution checks, but it's enough to indicate that the game's benchmark is compute bound at these settings. Dropping the quality level to Low produces the same pattern, but the fact that the line is more curved and only flattens off at around 1,900 MHz is further evidence that the benchmark's workload falls heavily on the GPU.
At this point we could have run further tests on the graphics card, changing its memory clocks, or done the same with the system memory. But with the evidence pointing to a compute problem rather than a data one, we turned instead to confirming exactly where in the processing chain the load was.
To do that, you need to change the CPU's clock rates. We used the Intel Extreme Tuning Utility to force all the cores in the processor to run at the same constant rate. If your system can't do this, then unfortunately this is one check that isn't available to you.
At 1080p and Ultra High settings, changing the CPU speed across a range of 1.4 GHz made barely any difference. This clearly shows that our test PC really was compute bound, and that the roadblock lay entirely on the GPU.
In fact, we had to drop right down to 720p with the lowest graphics details to see any significant change in frame rate with CPU clock speed.
The fact that the trend line flattens off at around 5 GHz – right in the same region as the CPU's stock speed – shows that the benchmark built into Assassin's Creed Odyssey is very much a graphics card test. Why? Because the CPU's performance only impacts the test outcome when the GPU is given the least amount of work possible.
So that's the test analysis done, and we have enough information to be confident in saying that the game's own benchmark pretty much just tests the graphics card, no matter what settings are used. But how does all this compare to what happens when you're actually in the game? Let's repeat the resolution tests:
To start with, the frame rates themselves are lower in the game than we found in the benchmark, but notice how different the trend lines are. At the lowest graphics settings, performance at 1080p is essentially the same as at 720p — the line is pretty flat between these resolutions — and only above that pixel count do we see the fps decrease. The 1% low results also followed this trend, just as we found with the benchmarking tool.
This tells us that the GPU copes with the work easily, so the game's performance is being determined by the capabilities of the CPU. Switching to Ultra settings reverses this pattern, and we see a curved trend line, just like we saw in the 3DMark test. It doesn't dip as much as Fire Strike Extreme's, but it's enough to indicate that, at these graphics levels, the game is somewhere between being compute bound and pixel bound.
We re-examined the effects of GPU and CPU clock speeds at Ultra and Low settings and essentially found the same patterns as before — all of this strongly suggests that Assassin's Creed's benchmark is definitely a graphics card test, but it is a reasonably good indicator of what to expect in the game itself.
Big caveat time, though — this is true for this PC, running this particular test, only. It cannot be stressed enough that with less or more capable hardware, the results would be different. A stronger GPU would cope with the compute load better, meaning the CPU would have more sway in the average frame rates, whereas a weaker GPU would fully control the performance.
But whatever system or game is used and checked, the test routine we've just gone through can be applied to any situation, be it in-game or in-benchmark. Let's summarize the overall process, so it's easier to follow and repeat with your own PC:
- Set the game's graphics details and resolution to the highest the PC will support
- Capture several sets of data and average the results
- Repeat the tests a few times, lowering the resolution for each set
- Plot the findings in a graph: straight lines indicate the game is compute bound (CPU and/or GPU), curves suggest pixel bound (GPU only)
- Pick a resolution and retest, but change the GPU clock speeds
- Plot these results: if the trend is constantly up, then the GPU is limiting the frame rate; if the trend line flattens off, then it's the CPU
- Repeat once more with CPU clock changes (if possible) to confirm the above
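The resolution-sweep step of the list above can be sketched as a small helper that compares how fps scales against pixel count. This is only a rough illustration (the 5% tolerance and the function name are my assumptions, chosen to match the flat-versus-curved reasoning in the article):

```python
def classify_resolution_sweep(results, tolerance=0.05):
    """results: dict mapping (width, height) -> average fps.
    Flat fps as pixel count rises suggests the game is compute bound;
    fps falling roughly in step with pixel count suggests pixel (GPU) bound."""
    runs = sorted(results.items(), key=lambda kv: kv[0][0] * kv[0][1])
    (w0, h0), fps0 = runs[0]       # lowest-resolution run
    (w1, h1), fps1 = runs[-1]      # highest-resolution run
    pixel_ratio = (w1 * h1) / (w0 * h0)
    fps_ratio = fps0 / fps1        # > 1 when fps drops at higher resolution
    if fps_ratio < 1 + tolerance:
        return "compute bound (CPU and/or GPU)"
    if fps_ratio > pixel_ratio * (1 - tolerance):
        return "pixel bound (GPU)"
    return "mixed"
```

Feeding it the averaged results from several runs at each resolution, as the list recommends, keeps a single noisy run from skewing the verdict.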
This is clearly a lot of work, and this is why testing hardware for reviews takes so much time and effort! However, you can do a simplified version, where the only thing you change is the GPU clocks — set up the game to how you normally have it, and use a tool to drop the speed of the graphics card in stages. Large changes in the average or 1% low frame rates will indicate that it's the GPU that's the limiting factor in the game's performance; if slicing off, say, 25% of the graphics card's speed doesn't make much difference, then it'll be the CPU that's controlling matters.
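The simplified version boils down to one comparison, which can be expressed in a few lines. The 10% threshold below is an arbitrary assumption for illustration; the article only says to look for a "large" change:

```python
def gpu_clock_test(fps_full, fps_reduced, threshold=0.10):
    """Simplified test: compare average fps at normal GPU clocks against
    fps with the clocks cut (e.g. by ~25%). A large drop points at the GPU;
    a small one means the CPU is setting the frame rate."""
    drop = (fps_full - fps_reduced) / fps_full
    return "GPU limited" if drop > threshold else "CPU limited"
```

Run it on both the average and the 1% low figures; if they disagree, the game is likely hovering between the two bottlenecks, as we saw at Ultra settings.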
Windows will now shut down
If PCs were like consoles, none of what we've been going through would be worth doing (or possible, for that matter). This is because consoles come in only a tiny range of hardware and software configurations. Game developers have a far easier job of ensuring their projects work properly on the Xbox, PlayStation, or Switch than on Windows-based computers.
And it's not hard to see why, when you look at all the different CPUs and GPU models that can be purchased — for example, Nvidia offers nearly 60 products that use their Turing processors and AMD has over 50 CPUs sporting the Zen architecture. Not every combination of the two would be used for gaming, but the count still runs into the thousands, and that's without throwing other processor models, RAM, motherboards, storage, operating systems, and drivers into the mix.
It might seem like it's nothing short of a miracle that developers manage to get their games to work at all on PCs, but they do it by generalizing their approach to how their code is going to run and what hardware support is required. This means there's always some room for improving a game's performance, but it also means that there's a good chance that a particular title might not run well on a specific setup.
This is why using games to benchmark hardware and sharing the results with the world can be so useful — no game developer has direct access to all of the possible hardware configurations, but through our and your hard work, they can collate the information and use it to continually improve their work. Well, theoretically they can!
Of course, in-depth benchmarking and data analysis isn't everyone's favorite flavor of ice cream; it can often be tedious to do and it rather misses the whole point of having PC games in the first place (i.e. play them!). But we hope this article has given you some insight as to how we test hardware and how you can do the same. And if you're wondering about what part of your PC to next upgrade, this is a great way of getting the numbers to help you make that decision.
If you have your own method of testing hardware or know about some neat logging tricks, share them with everyone in the Comments section below.
Tests done, stats calculated, data analyzed — time to shut down Windows!
Download: 20 Programs to Analyze and Benchmark Your Hardware
Don't Miss: How We Test: CPU Gaming Benchmarks — or: How I Learned to Stop Worrying and Benchmark Using a High-end GPU
- AMD Ryzen 9 3900X on Amazon
- AMD Ryzen 9 3950X on Amazon
- AMD Ryzen 7 3700X on Amazon
- AMD Ryzen 5 3600X on Amazon
- Sabrent 1TB Rocket NVMe PCIe 4.0 SSD on Amazon
- Asus ROG Strix GeForce RTX 2080 Ti on Amazon
- GeForce RTX 2070 Super on Amazon