Intel 5th Generation vs. 10th Generation, but why? You'd be forgiven if you had no idea that Intel was even making a mainstream 5th generation desktop processor family.
Broadwell was a big deal when it launched in mid-2015 on Intel's then-new and cutting-edge – and now infamous – 14nm process. We just didn't expect that 6 years later they would still be relying on it for their 11th-gen Core series processors.
The 14nm process had its problems and was badly delayed – not "10nm" badly, but delayed nonetheless. As a result, Broadwell arrived much later than expected, forcing Intel to refresh Haswell a year after its release, and it would be another full year before 14nm Broadwell finally arrived.
But it would be short-lived. Incredibly, just two months after the Core i7-5775C and Core i5-5675C were released, both parts were essentially discontinued when Intel moved on with Skylake-S and released a full range of 14nm Core i7, i5, i3, Pentium and even Celeron processors. Skylake also made the switch to DDR4, while the Broadwell parts were limited to DDR3 on the Z97 platform.
| | Core i7-5775C | Core i7-6700K | Core i7-8700K | Core i7-10700K | Core i7-11700K |
|---|---|---|---|---|---|
| MSRP (US) | $366 | $339 | $359 | $374 | $399 |
| Release date | June 2015 | August 2015 | October 2017 | May 2020 | March 2021 |
| Cores / threads | 4/8 | 4/8 | 6/12 | 8/16 | 8/16 |
| Base frequency | 3.30 GHz | 4.00 GHz | 3.70 GHz | 3.80 GHz | 3.60 GHz |
| Maximum turbo | 3.70 GHz | 4.20 GHz | 4.70 GHz | 5.10 GHz | 5.00 GHz |
| L3 cache | 6 MB | 8 MB | 12 MB | 16 MB | 16 MB |
| Socket | LGA 1150 | LGA 1151 | LGA 1151 | LGA 1200 | LGA 1200 |
| TDP | 65 W | 91 W | 95 W | 125 W | 125 W |
So in a way, Broadwell-DT was a failure, leaving the ultra-expensive Broadwell-E parts to carry Intel's HEDT agenda. But in my opinion, Broadwell-DT was a lot more interesting than Broadwell-E, even though it was limited to just 4 cores while Broadwell-E packed parts like the 10-core Core i7-6950X.
That's because Broadwell-DT contained what Intel called "Crystal Well", the codename for its eDRAM-equipped CPUs first introduced with Haswell, though only in BGA form. With the 5775C we got the first socketed Crystal Well part, packing 128MB of eDRAM that acted as an L4 cache. The intent was to give the iGPU much more memory bandwidth to improve 3D graphics performance, but when not in use by the iGPU, it could serve as an additional cache for the CPU.
We tested the Core i7-5775C in 2015 and found that it could sometimes beat AMD's best iGPU solutions in games by up to 60%, even though it cost at least three times as much. In fact, pricing was always going to be a big issue in 2015 for a desktop CPU with a built-in 128MB memory buffer: the 5775C launched at $366, a price at which Intel likely made a lot less margin than on the $339 Core i7-4770K of previous years.
Given my recent reviews of Intel's 10th-gen Core series, where we compared various L3 cache capacities with everything else kept equal, I was curious to see how the Core i7-5775C would fit into those results.
I hesitated at first because it's not exactly an apples-to-apples test: the 5775C uses slower DDR3 memory and would be difficult to clock up to 4.5 GHz. But since we had already tested the Core i3-10105F at 4.2 GHz, I figured that if I could get the 5775C to the same frequency, it would make for a really interesting comparison.
Like the Core i3 model, the 5775C packs a 6 MB L3 cache, though of course with much older cores. It also has the 128 MB L4 cache, which is made available to the CPU cores when a discrete graphics card – here the Radeon RX 6900 XT – handles the rendering. So how good was the 5775C, and can it hold up against modern quad-cores like the awkwardly named Core i3-10105F?
[Table: release year, microarchitecture, tick or tock, and process node by Intel generation]
To find out, we ran a number of benchmarks, of course, but first a couple of additional test notes. Note that the Core i3-10105F and Core i7-5775C ran at a slight clock penalty compared to the i5, i7 and i9 parts. That's because the highest stable frequency I could achieve with the 5775C was 4.2 GHz, and the 10105F is locked to a 4.2 GHz all-core frequency.
For the 10th-gen parts, we used the Gigabyte Z590 Aorus Xtreme motherboard. I clocked the three K-SKU CPUs at 4.5 GHz with a 45x ring bus multiplier and used DDR4-3200 CL14 dual-rank, dual-channel memory, with all primary, secondary and tertiary timings manually configured. The Core i3-10105F used the same spec memory, but its clock frequency couldn't be adjusted.
The Core i7-5775C was tested on the Asus Z97-Pro motherboard with the latest BIOS and DDR3-2400 CL11-13-13-31 memory. We used an RX 6900 XT for testing as it's the fastest 1080p gaming GPU on the market. Let's look at the results.
Starting with Rainbow Six Siege, the two results we want to focus on here are those of the Core i3-10105F and Core i7-5775C. The old 5th-gen Core i7 with DDR3-2400 memory drove the 6900 XT to an average of 325 fps, an impressive result when you consider that the 10105F was only 10% faster despite the benefit of much faster DDR4-3200 memory, along with numerous generations of core and architectural refinements.
This means that at the same core count and operating frequency, we're looking at an effective IPC improvement of just 10% for Intel over a 6-year period, using Rainbow Six Siege as the benchmark. That's a shockingly small improvement, especially as it also includes an upgrade in memory technology.
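As a rough sanity check, the per-clock comparison above can be sketched in a few lines of Python. Note the 10105F figure is derived here from the stated 10% margin, not an independently measured value:

```python
# Numbers from the Rainbow Six Siege results above.
fps_5775c = 325                  # Core i7-5775C average fps (measured)
fps_10105f = fps_5775c * 1.10    # Core i3-10105F, stated as ~10% faster

# Both chips run 4 cores / 8 threads at 4.2 GHz, so the fps ratio
# approximates the effective per-clock (IPC) gain in this workload.
ipc_gain = fps_10105f / fps_5775c - 1
print(f"Effective IPC gain: {ipc_gain:.0%}")  # → 10%
```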
Of course, Intel has added more cores and cache since then, but sticking with the 4-core configurations, we see the 10600K is 22% faster, the 10700K 31% faster, and the 10900K 37% faster, and of course those margins grow further once all supported cores are enabled on those parts.
The Assassin's Creed Valhalla data isn't particularly useful, as the 10th-gen parts all land within about 12% of one another in the 1% low data. It's difficult to make real performance claims based on this, so let's move on.
The Battlefield V results are interesting, as the 5775C wasn't much slower than the 10105F, especially looking at the 1% low data. Comparing average frame rates, the 5th-gen part appears to be bandwidth limited – something we'll look at a little later.
Moving to F1 2020, the 10th-gen Core i3 really doesn't offer much extra performance over the aging 5th-gen 5775C: we're looking at an 8% improvement in average frame rate and an 11% increase in 1% low performance. Where Intel has seen its bigger gains over the years is by simply adding more L3 cache, as with the 10600K. It's only a 13% increase in performance, but that's significant since the 10105F and 10600K are based on the exact same CPU architecture.
Hitman shows very little progress for Intel over the past 6 years. The 10th-gen Core i3 was just 5% faster than the 5775C, and again it's the doubling of the L3 cache that delivers the biggest improvement, with the 10600K seeing a 9% boost over the 10105F in average frame rate and an 11% increase in 1% low performance.
Next up we have Horizon Zero Dawn, and I suspect memory bandwidth plays a big role here, since the 10th-gen Core i3 shows one of the larger gains we've seen for the 10th gen over the 5th gen.
We're looking at an 11% increase in performance for the 10105F over the 5775C. Then, taking Intel's fastest quad-core – the 20MB 10900K with only 4 active cores – we see it's 24% faster than the 5775C. A decent increase in performance, but probably a lot less than you'd expect to see after more than half a decade.
Shadow of the Tomb Raider has always been brutal on quad-cores, including those with 8 threads, particularly in the village section we use for testing. That said, with sufficiently powerful cores the experience can be pretty good, as seen with the 4-core 10600K, 10700K and 10900K configurations. Looking at the 10105F, we see a 16% drop in 1% low performance versus the 10600K, while the 5775C is another 12% slower again.
So taking a closer look at the 5775C vs. 10105F comparison, we see Intel has improved performance by up to 14% over the past 6 years, looking at the 1% low data. Not amazing, but certainly better than most of the other results we've seen so far.
The last game we'll look at is Watch Dogs Legion, where the 10105F was 11% faster than the 5775C in average frame rate and 17% faster in the 1% low data. So once again, no huge increase in performance for 6 generations of progress.
Before wrapping up, let's take a look at cache bandwidth. Note that all the 10th-gen configurations delivered 38 GB/s of memory bandwidth with DDR4-3200, measured using the SiSoftware Sandra 2021 memory bandwidth test – not AIDA64, which gives higher readings.
I didn't bother with a memory bandwidth graph, as the 11 configurations tested produced only two different results: the 10th-gen parts all hit 38 GB/s regardless of model or core count, while the Core i7-5775C was limited to 30 GB/s with its DDR3-2400 memory. That means the newer 10th-gen CPUs enjoyed 27% more memory bandwidth, a significant advantage.
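The 27% figure follows directly from the two measured bandwidths:

```python
# Memory bandwidth figures from the Sandra 2021 results above.
ddr4_bw = 38.0  # GB/s, all 10th-gen parts with DDR4-3200
ddr3_bw = 30.0  # GB/s, Core i7-5775C with DDR3-2400

# Relative advantage of the 10th-gen parts over the 5775C.
advantage = ddr4_bw / ddr3_bw - 1
print(f"10th-gen memory bandwidth advantage: {advantage:.0%}")  # → 27%
```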
Now, looking at cache bandwidth, there are a few interesting things to note. First, L1 and L2 are per-core caches, so the reported bandwidth is effectively multiplied by the core count, which makes the 4-active-core configurations our best comparison, and there all the 10th-gen models are pretty much identical. What we can also see is that Intel has improved L1 cache bandwidth since the 5th gen.
Comparing the 5775C and 10105F, we see a 33% improvement in L3 cache bandwidth. So even though the capacity is the same, the bandwidth has been radically improved.
Finally, we see that the eDRAM L4 cache bandwidth, while useful, isn't very high – in fact, it's not much greater than the roughly 38 GB/s the 10th-gen models get from system memory, making it about 4x slower than the 10105F's L3 cache. That probably explains why, back in 2015, I found the 5775C was only about 10% faster than the 4770K clock-for-clock across a range of application benchmarks.
Obviously, Intel's progress has been slower than expected over the years, as it remained stuck on its 14nm process for what felt like forever. In a way it's impressive how much they've squeezed out of 14nm, while on the other hand you'd normally expect significantly more progress 6 years later.
If you'd told me in 2015 that Intel's newest and best CPUs would be around 10% faster in games by 2021 at the same core count and clock speed, I probably wouldn't have believed you. Even more so when you consider that back then I would have been benchmarking with a GeForce GTX 980 Ti, while today's 6900 XT is almost 3x faster.
Also, if you had told me that by 2021 AMD would be beating Intel on mainstream and high-end desktop platforms as well as in the server market, I would have passed out laughing, so definitely don't listen to my long-term predictions. Not that I make them, but when I do, ALT+F4.
To be fair, Intel certainly hasn't come as far as it would have liked since the 5th-gen Core i7. The 10900K and/or 11900K are much faster than the 5775C, but they're also much larger dies – especially the 11900K – and as we've noted several times before, that's because they all use the same 14nm process, or at least a variation of it.
So that's how the Core i7-5775C compares to the more modern 10th-gen models, and we have to say it holds up really well, especially when the comparison is limited to 4 cores. The L4 cache isn't nearly as useful as you might think, at least not for CPU performance, though it was certainly very useful for iGPU performance, which would otherwise be severely constrained for memory bandwidth.
And that will do it for this odd look back at the Core i7-5775C. We hope you enjoyed this hardware "science" feature. We'll be back with more.