The Rise, Fall and Revival of AMD

AMD is one of the oldest large-scale developers of microprocessors and has been the subject of polarizing debate among technology enthusiasts for almost 50 years. Its story makes for a fascinating tale, filled with heroic successes, daring mistakes, and a close shave with ruin. Where other semiconductor companies have come and gone, AMD has weathered many storms and fought numerous battles in boardrooms, courtrooms, and shops.

In this feature, we'll revisit the company's past, examine the twists and turns that led to the present, and wonder what lies ahead for this Silicon Valley veteran.

The rise to fame and fortune

To start our story, we have to go back in time to America in the late 1950s. After the hard years of World War II, this was the right time and place to be at the cutting edge of technological innovation.

Companies such as Bell Laboratories, Texas Instruments and Fairchild Semiconductor employed the best engineers and produced numerous firsts: the bipolar transistor, the integrated circuit, and the MOSFET (metal oxide semiconductor field effect transistor).

These young engineers wanted to explore and develop ever more exciting products, but with cautious managers mindful of a fearful and unstable world, frustration grew and many of them chose to go it alone.

In 1968, two Fairchild Semiconductor employees, Robert Noyce and Gordon Moore, left the company to strike out on their own. N M Electronics opened its doors that summer and was renamed Integrated Electronics, Intel for short, a few weeks later.

Others followed suit, and less than a year later eight more people left Fairchild to found their own electronics design and manufacturing company: Advanced Micro Devices (AMD, of course).

The group was led by Fairchild's former marketing director, Jerry Sanders. They started by redesigning parts from Fairchild and National Semiconductor rather than trying to compete directly with Intel, Motorola, and IBM, who were spending significant sums on the research and development of new integrated circuits.

From these humble beginnings, and after a quick move from Santa Clara to Sunnyvale (both in California's Silicon Valley), AMD was within months offering products distinguished by greater efficiency, stress tolerance, and speed. These microchips were designed to meet US military quality standards, which proved to be a significant advantage in the still-young computer industry, where reliability and production consistency varied widely.

When Intel released its first 8-bit microprocessor (the 8008) in 1974, AMD was a public company with a portfolio of over 200 products, a quarter of which were proprietary designs, including RAM chips, logic counters, and bit shifters. The following year brought a raft of new models: AMD's own Am2900 integrated circuit (IC) family and the 2 MHz 8-bit Am9080, a reverse-engineered copy of Intel's successor to the 8008. The former was a collection of components that are today fully integrated into CPUs and GPUs, but at the time, arithmetic logic units and memory controllers were separate chips.

The obvious plagiarism of Intel's design may seem shocking by today's standards, but it was par for the course in the early days of microchips. The CPU clone was eventually renamed the 8080A after AMD and Intel signed a cross-licensing agreement in 1976. You might imagine this cost a small fortune, but it was just $325,000 ($1.65 million in today's dollars).

The deal allowed AMD and Intel to flood the market with ridiculously profitable chips, selling for a little over $350 apiece, or double that for "military"-spec purchases. The 8085 (3 MHz) processor followed in 1977 and was soon joined by the 8086 (8 MHz). Design and manufacturing improvements led to the 8088 (5 to 10 MHz) in 1979, the same year production began at AMD's facility in Austin, Texas.

When IBM began moving from mainframe systems into personal computers (PCs) in 1982, it decided to outsource the parts rather than develop processors in-house. Intel's 8086, the first x86 processor, was selected with the explicit requirement that AMD act as a secondary source to guarantee a constant supply for the IBM PC/AT.

In February of that year, AMD and Intel signed a contract under which AMD would manufacture 8086, 8088, 80186 and 80188 processors, not only for IBM but also for the many proliferating IBM clones (Compaq being just one of them). AMD also began manufacturing the 16-bit Intel 80286, as the Am286, in late 1982.

This would become the first truly significant desktop PC processor, and while Intel's models generally ran between 6 and 10 MHz, AMD's started at 8 MHz and reached up to 20 MHz. This was undoubtedly the beginning of the battle for CPU dominance between the two Silicon Valley powerhouses: what Intel designed, AMD simply tried to do better.

During this time, the young PC market was experiencing tremendous growth. Noting that AMD was offering the Am286 with a significant speed boost over its own 80286, Intel tried to stop its rival in its tracks by refusing to license the next-generation 386 processors to them.

AMD sued, but the arbitration lasted four and a half years, and while the ruling found that Intel was under no obligation to transfer any new product to AMD, the larger chip maker was found to have violated an implied contract of good faith.

The refusal came during a critical period, just as the IBM PC-compatible share of the market was growing from 55% to 84%. With no access to the new processor's specifications, AMD took over five years to reverse engineer the 80386 into the Am386. Once completed, it again proved more than a match for Intel's model. While the original 386 debuted in 1985 at only 12 MHz and later reached 33 MHz, the top-end version of the Am386DX launched in 1989 at 40 MHz.

The success of the Am386 was followed in 1993 by the release of the highly competitive 40 MHz Am486, which offered around 20% more performance than Intel's 33 MHz i486 for the same price. This pattern was repeated across the entire 486 range: while Intel's 486DX topped out at 100 MHz, AMD (somewhat predictably by this point) offered a faster 120 MHz option. To better illustrate AMD's fortunes during this period, the company's revenue doubled from just over $1 billion in 1990 to well over $2 billion in 1994.

In 1995, AMD introduced the Am5x86 processor as the successor to the 486 and offered it as a direct upgrade for older computers. The Am5x86 P75+ ran at 150 MHz, with the P75 rating signifying performance roughly on par with Intel's Pentium 75, while the "+" indicated that the AMD chip was slightly faster than the competition at integer math.

To counteract this, Intel changed its naming conventions to distance itself from products from its competitor and other vendors. The Am5x86 generated substantial revenue for AMD, both from new sales and from upgrades of 486 machines. As with the Am286, 386 and 486, AMD continued to extend the market reach of these parts by offering them as embedded solutions.

In March 1996, AMD introduced the first processor developed entirely by its own engineers: the 5k86, later renamed the K5. The chip was designed to compete with the Intel Pentium and the Cyrix 6×86, and strong execution of the project was crucial for AMD: the chip was expected to have a far more capable floating-point unit than Cyrix's, roughly on par with the Pentium 100, while its integer performance targeted the Pentium 200.

In the end, it was a missed opportunity, as the project was beset by design and manufacturing problems. The CPU failed to meet its frequency and performance targets and arrived late to market, resulting in poor sales.

By then, AMD had spent $857 million on NexGen, a small fabless (design-only) chip company whose processors were manufactured by IBM. AMD's K5 and the in-development K6 had scaling problems at higher clock speeds (~150 MHz and above), while NexGen's Nx686 had already demonstrated a core speed of 180 MHz. After the buyout, the Nx686 became AMD's K6 and the original in-house design was consigned to the scrapheap.

AMD's rise mirrored Intel's decline, from the humble beginnings of the K6 architecture, which competed against Intel's Pentium, Pentium II and (largely revised) Pentium III. The K6 owed its existence and capabilities to a former Intel employee, Vinod Dham (a.k.a. the "father of the Pentium"), who left Intel in 1995 to go work at NexGen.

When the K6 hit the shelves in 1997, it represented a genuine alternative to the Pentium MMX. The K6 only got stronger: from 233 MHz in the first stepping, to 300 MHz with the "Little Foot" revision in January 1998, 350 MHz with the "Chomper" K6-2 in May 1998, and an astonishing 550 MHz in September 1998 with the "Chomper Extended" revision.

The K6-2 introduced AMD's 3DNow! SIMD (single instruction, multiple data) instruction set. Conceptually similar to Intel's later SSE, it provided an easier route to the CPU's floating-point capabilities. The downside was that programmers had to incorporate the new instructions into their code, and patches and compilers had to be reworked to make use of them.
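To give a sense of what that meant in practice, here is a minimal, hypothetical sketch (not taken from the article) of packed floating-point math using the 3DNow! intrinsics that GCC exposes via mm3dnow.h; it assumes a 3DNow!-capable CPU and a build flag along the lines of gcc -m3dnow.

    /* Hypothetical sketch: add two pairs of floats with 3DNow! intrinsics.
       Assumes a 3DNow!-capable AMD CPU and a compiler shipping mm3dnow.h. */
    #include <mm3dnow.h>   /* _m_pfadd, _m_femms, __m64 */
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        float a[2] = { 1.5f, 2.5f };
        float b[2] = { 0.5f, 4.0f };
        float r[2];
        __m64 va, vb, vr;

        memcpy(&va, a, sizeof va);   /* pack two floats into one 64-bit MMX register */
        memcpy(&vb, b, sizeof vb);

        vr = _m_pfadd(va, vb);       /* PFADD: packed single-precision add, both lanes at once */
        memcpy(r, &vr, sizeof r);

        _m_femms();                  /* FEMMS: clear MMX state before regular FPU/printf use */
        printf("%.1f %.1f\n", r[0], r[1]);   /* expected: 2.0 6.5 */
        return 0;
    }

Without 3DNow!-aware code paths like this, or compilers emitting them automatically, the new hardware simply sat idle, which is exactly the adoption problem described above.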

Like the original K6, the K6-2 was far cheaper than the competition, often costing half as much as Intel's Pentium chips. The final iteration of the K6, the K6-III, was a more complex CPU, with a transistor count of 21.4 million, compared to 8.8 million in the first K6 and 9.4 million in the K6-2.

It featured AMD's PowerNow!, which dynamically adjusted clock speeds according to workload. Eventually reaching speeds of 570 MHz, the K6-III was fairly expensive to manufacture and had a relatively short life, cut short by the arrival of the K7, which was better placed to compete with the Pentium III and beyond.

1999 marked the pinnacle of AMD's golden era: the arrival of the K7 processor, under the Athlon brand, showed that the company really wasn't the cheap copycat option anymore.

Starting at 500 MHz, Athlon CPUs used the new Slot A format and a new internal system bus licensed from DEC (EV6), which ran at 200 MHz and eclipsed the 133 MHz Intel was offering at the time. The Athlon Thunderbird arrived in June 2000, a CPU cherished by many for its overclockability, bringing DDR RAM support and a full-speed on-die Level 2 cache.

Thunderbird and its successors (Palomino, Thoroughbred, Barton and Thorton) battled Intel's Pentium 4 throughout the first five years of the millennium, usually at a lower price but always with better performance. Athlon received a major update in September 2003 in the form of the K8 (codenamed ClawHammer), better known as the Athlon 64, because it extended the x86 instruction set with 64-bit support.

This period is often cited as AMD's defining moment, as the MHz-at-all-costs approach of Intel's Netburst architecture was exposed as a classic example of a development dead end.

Both revenue and operating profit were excellent for such a relatively small company. While AMD wasn't earning at Intel's level, it was flush with success and hungry for more. But when you're on top of the highest mountain, it takes constant effort to stay there; otherwise, there's only one way to go.

Paradise lost

There is no single event responsible for AMD's fall from its lofty position. A global economic crisis, internal mismanagement, poor financial forecasting, becoming a victim of its own success, and Intel's fortunes and misdeeds all played a part in one way or another.

But let's look at how things stood in early 2006. The CPU market was packed with offerings from AMD and Intel, but the former had the exceptional K8-based Athlon 64 FX series. The FX-60 was a 2.6 GHz dual-core, while the FX-57 was single-core but ran at 2.8 GHz.

Both were head and shoulders above everything else, as reviews at the time showed. They were hugely expensive, the FX-60 retailing at over $1,000, but so was Intel's crème de la crème, the 3.46 GHz Pentium Extreme Edition 955. AMD also seemed to hold the upper hand in the workstation and server market, where Opteron chips were outperforming Intel's Xeon processors.

The problem for Intel was the Netburst architecture: its extremely deep pipeline required very high clock speeds to stay competitive, which in turn drove up power consumption and heat output. The design had reached its limits and had run its course. Intel abandoned its development and turned to the older Pentium Pro/Pentium M CPU architecture to build a successor to the Pentium 4.

That initiative first produced the Yonah design for mobile platforms and then, in August 2006, the dual-core Conroe architecture for desktops. Such was Intel's need for a fresh start that the Pentium name was relegated to low-end budget models and replaced by Core: 13 years of brand dominance wiped away in an instant.

The move to a low-power, high-throughput chip design proved ideal for a variety of emerging markets, and almost overnight Intel reclaimed the mainstream and enthusiast sectors. By the end of 2006, AMD had been pushed off the CPU summit, but it was a disastrous management decision that sent the company tumbling down the slope.

Three days before Intel launched the Core 2 Duo, AMD made a move that had been fully sanctioned by then-CEO Hector Ruiz (Sanders had retired four years earlier). On July 24, 2006, AMD announced its intention to acquire graphics card maker ATI Technologies for $5.4 billion (comprising $4.3 billion in cash and loans, and $1.1 billion raised from 58 million shares). The deal was a huge financial gamble, equivalent to around 50% of AMD's market capitalization at the time, and while the acquisition made sense, the price absolutely did not.

ATI was hugely overvalued, as neither it nor Nvidia was generating anything close to that kind of revenue. ATI also had no manufacturing facilities; its value lay almost entirely in intellectual property.

AMD eventually acknowledged the mistake when it absorbed $2.65 billion in write-downs due to an overestimation of ATI's goodwill valuation.

Further underlining management's lack of foresight, Imageon, ATI's handheld graphics division, was sold to Qualcomm in a meager $65 million deal. That division is now called Adreno, an anagram of "Radeon", and an integral part of the Snapdragon SoC (!).

Xilleon, a 32-bit SoC for digital TVs and cable boxes, was shipped off to Broadcom for $192.8 million.

AMD's response to Intel's revamped architecture was not long in coming. Two weeks after the launch of Core 2, AMD's President and COO Dirk Meyer announced the completion of the company's new K10 "Barcelona" processor. This was to be its decisive move in the server market, as it was a fully fledged quad-core CPU, whereas Intel was still only producing dual-core Xeon chips at the time.

The new Opteron chip launched to much fanfare in September 2007, but instead of stealing Intel's thunder, the party ground to a halt with the discovery of a bug that, in rare circumstances, could cause crashes when nested cache writes were performed. Rare or not, the TLB bug halted AMD's K10 production. In the meantime, a BIOS patch that worked around the problem on processors already shipped cut performance by around 10%. By the time the revised "B3 stepping" CPUs shipped six months later, the damage had been done, to both sales and reputation.

Later that year, in late 2007, AMD brought the quad-core K10 design to the desktop market. By then Intel was back on the front foot and had launched the now-famous Core 2 Quad Q6600. On paper, the K10 was the superior design: all four cores sat on the same die, unlike the Q6600, which used two separate dies in the same package. However, AMD struggled to reach the expected clock speeds, and the best version of the new CPU ran at just 2.3 GHz. That made it slower, albeit by only around 100 MHz, than the Q6600, yet also a little more expensive.

The most puzzling aspect, however, was AMD's decision to launch a new model name: Phenom. Intel had switched to Core because Pentium had become synonymous with excessive price and relatively poor performance. Athlon, on the other hand, was a name computer enthusiasts knew well, and one whose reputation for speed was well earned. The first version of Phenom wasn't actually bad; it just wasn't as good as the Core 2 Quad Q6600, a product that was already available, and Intel had faster offerings on the market too.

Oddly enough, AMD seemed almost deliberately averse to advertising, and it was similarly absent on the software side of things: a strange way to run any business, let alone one struggling in the semiconductor trade. No review of this era in AMD's history would be complete, however, without considering Intel's anti-competitive actions. At this point, AMD was fighting not only Intel's chips but also the company's monopolistic practices, which included paying large sums of money to OEMs, billions in total, to actively keep AMD CPUs out of new computers.

In the first quarter of 2007, Intel paid Dell $723 million to remain the sole supplier of its processors and chipsets, a sum equivalent to 76% of Dell's total operating income of $949 million for that quarter. AMD would later reach a $1.25 billion settlement with Intel, which seems surprisingly low on the surface, but was likely tempered by the fact that, even as Intel was playing these games, AMD couldn't produce enough CPUs to supply its existing customers anyway.

Not that Intel needed to do any of it. Unlike AMD, it had a solid long-term roadmap and a far broader range of products and IP. It also had cash reserves like no other: by the end of the first decade of the new millennium, Intel was posting revenue of over $40 billion and operating income of $15 billion. That funded enormous budgets for marketing, research, and software development, as well as foundries tailored specifically to its own products and schedules. These factors alone left AMD fighting for market share.

A multi-billion dollar overpayment for ATI and the interest on the associated loans, a disappointing successor to the K8, and problematic chips that arrived late were all bitter pills to swallow. But things would get worse.

One step forward, one sideways, any number back

By 2010, the global economy was struggling to recover from the 2008 financial crisis. AMD had shed its flash memory division a few years earlier, along with all of its chip fabrication plants; the latter eventually became GlobalFoundries, which AMD still uses to manufacture some of its products. Around 10% of the workforce had been laid off, and altogether the savings and cash injection allowed AMD to focus entirely on processor design.

Instead of improving the K10 design, AMD started over with a new structure, and in late 2011 the Bulldozer architecture was introduced. Where the K8 and K10 were genuine multicore designs, the new layout was classed as "clustered multithreading".

AMD went modular with Bulldozer: each cluster (or module) contained two integer processing cores, but these were not completely independent, sharing the L1 instruction and L2 caches, the fetch/decode hardware, and the floating-point unit. AMD even went so far as to drop the Phenom name and hark back to the glory days of the Athlon FX, simply calling the first Bulldozer CPUs AMD FX.

The idea behind all these changes was to reduce the overall size of the chips and make them more power efficient. Smaller dies would improve manufacturing yields and margins, and greater efficiency would help raise clock speeds. The scalable design would also make it suitable for a broader range of markets.

The best model at launch in October 2011, the FX-8150, had 4 clusters but was marketed as an 8-core, 8-thread CPU. By this point processors sported multiple clock states, and the FX-8150's base frequency was 3.6 GHz with a turbo clock of 4.2 GHz. However, the chip measured 315 mm² and had a peak power draw of over 125 W. Intel had already released the Core i7-2600K: a conventional 4-core, 8-thread CPU running at up to 3.8 GHz. At 216 mm², it was significantly smaller than the new AMD chip and consumed 30 W less power.

On paper, the new FX should have dominated, but its performance was distinctly underwhelming: at times the ability to handle lots of threads shone through, but single-threaded performance was often no better than that of the Phenom range it was meant to replace, despite the superior clock speeds.

Having poured millions of dollars into Bulldozer's research and development, AMD was certainly not going to abandon the design, and the purchase of ATI was at last beginning to bear fruit. AMD's first attempt at a combined CPU-and-GPU package, called Fusion, had arrived late and was disappointingly weak.

However, the project provided AMD with the means to address other markets. In early 2011, another new architecture called Bobcat was released.

It targeted low-power applications such as embedded systems, tablets, and notebooks, and was essentially the polar opposite of Bulldozer: just a handful of pipelines and not much else. Bobcat received a much-needed update to the Jaguar architecture a few years later, and it was Jaguar that Microsoft and Sony selected in 2013 to power the Xbox One and PlayStation 4.

Although profit margins would be relatively slim, as consoles are typically driven down to the lowest possible price, both platforms sold millions of units, underlining AMD's ability to create custom SoCs.

AMD continued to revise the Bulldozer design over the years. Piledriver came first and gave us the FX-9590 (a 220 W, 5 GHz monstrosity), but Steamroller and the final version, Excavator (which arrived some four years after Bulldozer's debut, with products using it for years afterwards), focused more on reducing power consumption than on anything particularly new.

By then, the naming structure for AMD's CPUs had become confusing, to say the least. Phenom had long since been consigned to the history books, and FX carried a somewhat tarnished reputation, so AMD abandoned that nomenclature and referred to its Excavator-based desktop CPUs simply as the A-series.

The company's graphics division, home to the Radeon products, endured a similarly mixed fate. AMD kept the ATI brand name until 2010 before swapping it for its own. At the end of 2011, with the release of Graphics Core Next (GCN), it also completely reworked the GPU architecture created by ATI. This design would go on to serve for almost 8 years, finding its way into consoles, desktops, workstations, and servers; it is still used today as the integrated GPU in AMD's so-called APU processors.

GCN processors packed immense compute power, but the structure wasn't the easiest to extract the most from. The most potent version AMD ever made, the Vega 20 GPU in the Radeon VII, offered 13.4 TFLOPS of processing power and 1,024 GB/s of memory bandwidth, yet in games it simply couldn't reach the same heights as the best from Nvidia.

Radeon products gained a reputation for running hot, loud, and very power-hungry. The initial iteration of GCN that powered the HD 7970 drew well over 200 W at full load, although it was made on a relatively large process node, TSMC's 28 nm. By the time GCN was fully mature, Vega 10 chips were being made on GlobalFoundries' 14 nm node, yet the power demands were no better: the Radeon RX Vega 64 consumed close to 300 W at its peak.

Although AMD had a decent product lineup, the parts just weren't as good as they needed to be, and the company struggled to make enough money.

Fiscal year | Revenue ($ billion) | Gross margin | Operating income ($ million) | Net income ($ million)
2016 | 4.27 | 23% | -372 | -497
2015 | 4.00 | 27% | -481 | -660
2014 | 5.51 | 33% | -155 | -403
2013 | 5.30 | 37% | 103 | -83
2012 | 5.42 | 23% | -1060 | -1180
2011 | 6.57 | 45% | 368 | 491

By the end of 2016, the company had posted losses for four consecutive years (the 2012 figures were hit by a $700 million write-off related to GlobalFoundries). Debt was still high despite the foundries and other divisions having been sold off, and not even the success of the semi-custom chips in the Xbox and PlayStation offered enough relief.

At first glance, AMD seemed to be in big trouble.

New stars a-ryze

With nothing left to sell and no sign of major investment on the horizon, AMD could only do one thing: double down and restructure. In 2012, the company took on two people who would play a crucial role in reviving the semiconductor firm.

Jim Keller, the former lead architect of the K8 series, returned after a 13-year absence to head two projects: an ARM-based design for server markets, and a standard x86 architecture, with Mike Clark (lead designer of Bulldozer) as chief architect.

He was joined by Lisa Su, previously Senior Vice President and General Manager at Freescale Semiconductor. She took on the same role at AMD and, along with then-President Rory Read, is widely credited with pushing the company into non-PC markets, particularly consoles.

Two years after Keller's return to AMD's research and development department, CEO Rory Read stepped down and the SVP/GM stepped up. With a PhD in electrical engineering from MIT and research into SOI (silicon-on-insulator) MOSFETs behind her, Su had the academic background and industry experience needed to bring AMD back to its heyday. In the world of large-scale processors, though, nothing happens overnight: chip designs take several years, at best, to be ready for market. AMD would have to ride out the storm until such plans could come to fruition.

While AMD continued to struggle, Intel went from strength to strength. The Core architecture and its manufacturing process nodes had matured nicely, and by the end of 2016 the company was posting revenue of almost $60 billion. For several years, Intel had followed a "tick-tock" approach to processor development: a "tock" would be a new architecture, while a "tick" would be a process refinement, typically in the form of a smaller node.

Despite the enormous profits and near-total market dominance, however, all was not well behind the scenes. In 2012, Intel expected to be shipping CPUs on a cutting-edge 10 nm node within three years. That particular tick never happened; in fact, the clock never really tocked, either. Its first 14 nm CPUs, using the Broadwell architecture, appeared in 2015, and that node and the underlying design hung around for half a decade.

Engineers at the foundries repeatedly ran into yield problems with 10 nm, forcing Intel to refine the older processes and architectures year after year. Clock speeds and power consumption climbed ever higher, but no new designs appeared: an echo, perhaps, of its Netburst days. PC buyers were left with frustrating choices: pick something from the powerful Core line and pay a premium, or settle for the weaker, cheaper FX/A-series.

But AMD had quietly been building a hand of cards, and in February 2016, at its annual E3 event, it played it. Using the eagerly anticipated Doom reboot as the announcement platform, the all-new Zen architecture was unveiled to the public.

Very little was said about the fresh design beyond phrases like "simultaneous multithreading", "high-bandwidth cache", and "energy-efficient FinFET design". More details followed at Computex 2016, including a target of a 40% improvement over the Excavator architecture.

To say this was ambitious would be an understatement, especially given that AMD had managed, at best, modest 10% gains with each revision of the Bulldozer design.

It would take another 12 months before the chip actually appeared, but when it did, AMD's long game was finally clear.

Any new hardware design needs the right software to sell it, but heavily multithreaded CPUs faced an uphill battle. Although consoles shipped with 8-core CPUs, most games were still fine with just 4. The main reasons were Intel's market dominance and the design of AMD's chip in the Xbox One and PlayStation 4. Intel had brought out its first 6-core CPU back in 2010, but it was enormously expensive (almost $1,100). Others quickly followed, but it would take another seven years before Intel offered a genuinely affordable hexa-core processor, the Core i5-8400, for under $200.

The issue with the console processors was that the CPU layout consisted of two 4-core clusters on the same die, with high latency between the two sections of the chip. So game developers tended to keep the engine's threads located on one of the sections, and only use the other for general background processes. Only in the world of workstations and servers was there a need for seriously multi-threaded CPUs – until AMD decided otherwise.
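As a rough illustration of what "keeping threads on one section" looks like in code, here is a minimal, hypothetical sketch using the Linux pthread affinity API (console SDKs expose their own equivalents, which this does not reproduce); it assumes the first 4-core cluster maps to logical CPUs 0-3.

    /* Hypothetical sketch only: pin the calling thread to the first 4-core
       cluster, assumed here to be logical CPUs 0-3. Build with -pthread. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static int pin_to_first_cluster(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        for (int cpu = 0; cpu < 4; ++cpu)   /* CPUs 0-3: one cluster (assumption) */
            CPU_SET(cpu, &set);

        /* Restrict this thread to those cores so its memory traffic stays
           within one cluster and avoids the slow hop across the die. */
        return pthread_setaffinity_np(pthread_self(), sizeof set, &set);
    }

    int main(void)
    {
        int err = pin_to_first_cluster();
        if (err != 0)
            fprintf(stderr, "pthread_setaffinity_np failed: %d\n", err);
        else
            puts("engine thread pinned to cluster 0 (CPUs 0-3)");
        return 0;
    }

Background tasks would then be given the opposite affinity mask, which is essentially the scheduling split console developers settled on.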

In March 2017, general desktop users could upgrade their systems with one of two 8-core, 16-thread CPUs. A completely new architecture clearly deserved a new name, and AMD cast off Phenom and FX to give us Ryzen.

Neither CPU was particularly cheap: the 3.6 GHz (4 GHz boost) Ryzen 7 1800X retailed at $500, with the 0.2 GHz slower 1700X selling for $100 less. In part, AMD was keen to shake off the perception of being the budget choice, but it was mostly because Intel was charging over $1,000 for its 8-core offering, the Core i7-6900K.

Zen took the best from all previous designs and melded them into a structure focused on keeping the pipelines as busy as possible, which required significant improvements to the pipeline and cache systems. The new design dropped the sharing of L1/L2 caches used in Bulldozer, and each core was now fully independent, with more pipelines, better branch prediction, and greater cache bandwidth.

Reminiscent of the chip powering Microsoft and Sony's consoles, the Ryzen CPU was also a system-on-a-chip; the only thing it lacked was a GPU (later budget Ryzen models included a GCN processor).

The die was sectioned into two so-called CPU Complexes (CCX), each of which was a 4-core, 8-thread unit. Also packed into the die was a Southbridge of sorts: the CPU offered controllers and links for PCI Express, SATA, and USB. This meant motherboards could, in theory, be made without a separate Southbridge, but nearly all still included one, just to expand the number of possible device connections.

All of this would be for nothing if Ryzen couldn't perform, and AMD had a lot to prove in this area after years of playing second fiddle to Intel. The 1800X and 1700X weren't perfect: as good as anything Intel had for professional applications, but slower in games.

AMD had other cards to play: a month after the first Ryzen CPUs hit the market came 6- and 4-core Ryzen 5 models, followed two months later by 4-core Ryzen 3 chips. They stacked up against Intel's offerings in the same manner as their larger brothers, but they were significantly more cost effective.

And then came the aces — the 16-core, 32-thread Ryzen Threadripper 1950X (with an asking price of $1,000) and the 32-core, 64-thread EPYC processor for servers. These behemoths comprised two and four Ryzen 7 1800X chips, respectively, in the same package, utilizing the new Infinity Fabric interconnect system to shift data between the chips.

In the space of six months, AMD showed that they were effectively targeting every x86 desktop market possible, with a single, one-size-fits-all design.

A year later, the architecture was updated to Zen+, which consisted of tweaks to the cache system and a switch from GlobalFoundries' venerable 14LPP process, a node that had been licensed from Samsung, to an updated, denser 12LP one. The CPU dies remained the same size, but the new fabrication method allowed the processors to run at higher clock speeds.

Another 12 months after that, in the summer of 2019, AMD launched Zen 2. This time the changes were more significant and the term chiplet became all the rage. Rather than following a monolithic construction, where every part of the CPU sits in the same piece of silicon (as Zen and Zen+ do), the engineers separated the Core Complexes from the interconnect system.

The former were built by TSMC, using its N7 process, becoming full dies in their own right, hence the name Core Complex Die (CCD). The input/output structure was made by GlobalFoundries, with desktop Ryzen models using a 12LP chip and Threadripper & EPYC sporting larger 14 nm versions.

The chiplet design will be retained and refined for Zen 3, currently pencilled in for release late in 2020. We're not likely to see the CCDs break from Zen 2's 8-core, 16-thread layout; instead, it'll be a similar improvement to what we saw with Zen+ (i.e. cache, power efficiency, and clock speed refinements).

It's worth taking stock of what AMD achieved with Zen. In the space of 8 years, the architecture went from a blank sheet of paper to a comprehensive portfolio of products, spanning $99 4-core, 8-thread budget offerings through to $4,000+ 64-core, 128-thread server CPUs.

AMD's finances have changed dramatically as well: from losses and debts running into the billions, AMD is now on track to clear its loans and post an operating income in excess of $600 million within the next year. While Zen may not be the sole factor in the company's financial revival, it has helped enormously.

AMD's graphics division has seen similar changes in fortune. In 2015, the section was given full independence as the Radeon Technologies Group (RTG). The most significant development from its engineers came in the form of RDNA, a substantial reworking of GCN. Changes to the cache structure, along with adjustments to the size and grouping of the compute units, shifted the architecture's focus squarely towards gaming.

The first models to use this new architecture, the Radeon RX 5700 series, demonstrated the design's serious potential. This was not lost on Microsoft and Sony, who both selected Zen 2 and the updated RDNA 2 to power their forthcoming new Xbox and PlayStation 5 consoles.

Although the Radeon Group hasn't enjoyed the same level of success as the CPU division, and its graphics cards are perhaps still seen as the "value option," AMD is quantifiably back to where it was in the Athlon 64 days in terms of architecture development and technological innovation. They rose to the top, fell from grace, and like a beast from mythology, engineered their own rebirth from the ashes.

Looking ahead with caution

It's perfectly reasonable to ask a simple question about AMD: could they return to the dark days of dismal products and no money?

Even if 2020 proves to be an excellent year for AMD, and positive Q1 financial results show a 40% improvement over the previous year, $9.4 billion of revenue would still put them behind Nvidia ($10.7 billion in 2019) and light years away from Intel ($72 billion). The latter has a much larger product portfolio, of course, and its own foundries, but Nvidia's income relies almost entirely on graphics cards.

It's clear that both revenue and operating income need to grow in order to fully stabilize AMD's future, so how could this be achieved? The bulk of their income comes from what they call the Computing and Graphics segment, i.e. Ryzen and Radeon sales. This will undoubtedly continue to improve, as Ryzen is very competitive and the RDNA 2 architecture will provide a common platform for games that run as well on PCs as they do on next-generation consoles.

Intel's latest desktop CPUs hold an ever-decreasing lead in gaming, and they lack the breadth of features that Zen 3 will offer. Nvidia holds the GPU performance crown, but faces stiff competition in the mid-range sector from Radeons. It's perhaps nothing more than a coincidence, but even though RTG is a fully independent division of AMD, its revenue and operating income are grouped with the CPU sector; this suggests that their graphics cards, while popular, do not sell in the same quantities as their Ryzen products do.

Possibly a more pressing issue for AMD is that their Enterprise, Embedded and Semi-Custom segment accounted for just under 20% of Q1 2020 revenue, and ran at an operating loss. This may be explained by the fact that current-gen Xbox and PlayStation sales have stagnated, in light of the success of Nintendo's Switch and the forthcoming new models from Microsoft and Sony. Intel has also utterly dominated the enterprise market, and nobody running a multi-million dollar datacenter is going to throw it all out just because an amazing new CPU is available.

But this could change over the next couple of years, partly through the new game consoles, but also via an unexpected alliance. Nvidia, of all companies, picked AMD over Intel as the CPU supplier for its new deep learning/AI compute cluster, the DGX A100. The reason is straightforward: the EPYC processor has more cores and memory channels, and faster PCI Express lanes, than anything Intel has to offer.

If Nvidia is happy to use AMD's products, others will certainly follow suit. AMD will have to keep climbing a steep mountain, but today it'd appear they have the right tools for the job. As TSMC continues to tweak and refine its N7 process node, all AMD chips made using the process are going to be incrementally better, too.

Looking forward, there are a few areas within AMD that could use genuine improvement. One such area is marketing. The "Intel Inside" catchphrase and jingle have been ubiquitous for over 30 years, and while AMD spends some money promoting Ryzen, ultimately it needs system builders such as Dell, HP, and Lenovo to sell machines sporting its processors in the same light, and with the same specifications, as they do Intel's.

On the software side, there's been plenty of work on applications that enhance the user experience, such as Ryzen Master, but it was only recently that Radeon drivers were suffering widespread problems. Gaming drivers can be hugely complex, but their quality can make or break the reputation of a piece of hardware.

AMD is currently in the strongest position it has ever been in during its 51-year history. With the ambitious Zen project showing no signs of hitting any limits soon, the company's phoenix-like rebirth has been a tremendous success. They're not at the top of the mountain, though, and perhaps that's for the better. It's said that history always repeats itself, but let's hope that doesn't come to pass. A healthy and competitive AMD, fully able to meet Intel and Nvidia head-on, only brings benefits to users.

What are your thoughts on AMD and its trials and tribulations? Did you own a K6 chip, or perhaps an Athlon? What's your favorite Radeon graphics card? Which Zen-based processor are you most impressed by? Share your comments in the section below.
