The History of the Microprocessor and the Personal Computer

The personal computing business as we know it grew out of an environment of enthusiasts, entrepreneurs, and happenstance. Before personal computers, the mainframe and minicomputer business model was built around a single company providing an entire ecosystem: from building the hardware to installing and maintaining it, writing the software, and training the operators.

This approach served its purpose in a world where, it seemed, only a few computers would ever be required. It also made the systems enormously expensive, yet extremely lucrative for the companies involved, since the initial cost and ongoing service contracts ensured a steady stream of revenue. The "big iron" companies were never going to be the driving force in personal computing, given the costs involved, the lack of off-the-shelf software, the perceived scarcity of would-be computer owners, and the generous profit margins on mainframe and minicomputer contracts.

It was in this atmosphere that personal computing began, with hobbyists seeking creative outlets that their day jobs with monolithic systems did not offer. The invention of the microprocessor, DRAM, and EPROM integrated circuits would spark the widespread use of BASIC high-level language variants, which in turn would lead to the introduction of the GUI and bring computing to the mainstream. The resulting standardization and commoditization of hardware would ultimately make computing relatively affordable for the individual.

Over the next few weeks we will delve deeply into the history of the microprocessor and the personal computer, from the invention of the transistor to the modern chips powering our multitude of connected devices.

1947-1974: The Foundations

Before the Intel 4004, the first commercial microprocessor

Early personal computing required enthusiasts to have a working knowledge of both electrical component assembly (primarily the ability to solder) and machine coding, since software at that point was a bespoke affair, where it was available at all.

The established commercial market leaders did not take personal computing seriously because of its limited input-output functionality and software, the lack of standardization, the high level of skill demanded of users, and the few envisaged applications. Intel's own engineers had lobbied for the company to adopt a personal computing strategy almost as soon as the 8080 began being implemented in a far wider range of products than originally intended. Steve Wozniak pleaded with his employer, Hewlett-Packard, to do the same.

John Bardeen, William Shockley and Walter Brattain at Bell Labs, 1948.

While hobbyists initiated the phenomenon of personal computing, the current situation is largely an extension of the lineage that began with the work of Michael Faraday, Julius Lilienfeld, Boris Davydov, Russell Ohl, Karl Lark-Horovitz, William Shockley, Walter Brattain, John Bardeen, Robert Gibney, and Gerald Pearson. The last five, working together at Bell Telephone Labs, developed the first transistor (a contraction of "transfer resistance") in December 1947.

Bell Labs continued to be a driving force in the advancement of the transistor (notably the metal oxide semiconductor field-effect transistor, or MOSFET, in 1959), but granted extensive licenses to other companies in 1952 to avoid antitrust sanctions from the US Department of Justice. Forty companies, including General Electric, RCA, and Texas Instruments, joined Bell and its manufacturing parent, Western Electric, in the fast-growing semiconductor business. Shockley would leave Bell Labs and form Shockley Semi-Conductor in 1956.

The first transistor ever assembled, invented by Bell Labs in 1947

Shockley was an excellent engineer, but his caustic personality, combined with his poor management of employees, doomed the company in short order. Within a year of assembling his research team he had alienated enough of its members to trigger the mass exodus of "The Traitorous Eight," which included Robert Noyce and Gordon Moore, two of Intel's future founders; Jean Hoerni, inventor of the planar manufacturing process for transistors; and Jay Last. Members of the Eight would form the core of the new Fairchild Semiconductor division of Fairchild Camera and Instrument, a company that became the template for the Silicon Valley startup.

Fairchild's senior management would increasingly marginalize the new division as it focused on winning high-profile transistor contracts, such as those for the IBM-built flight systems of North American's XB-70 Valkyrie strategic bomber, the Autonetics flight computer of the Minuteman ICBM system, the CDC 6600 supercomputer, and NASA's Apollo Guidance Computer.

While hobbyists initiated the phenomenon of personal computing, the current situation is largely an extension of the lineage that began with work on early semiconductors in the late 1940s.

However, profits declined as Texas Instruments, National Semiconductor, and Motorola won their share of contracts. By late 1967, Fairchild Semiconductor had become a shadow of its former self as budget cuts and the departure of key staff took hold. Its formidable R&D acumen often failed to translate into commercial products, and warring factions within management proved counterproductive for the company.

The Traitorous Eight, who quit Shockley to start Fairchild Semiconductor. From left: Gordon Moore, Sheldon Roberts, Eugene Kleiner, Robert Noyce, Victor Grinich, Julius Blank, Jean Hoerni and Jay Last. (Photo © Wayne Miller / Magnum)

Most prominent among those to leave were Charles Sporck, who revitalized National Semiconductor, along with Gordon Moore and Robert Noyce. While over fifty new companies would eventually trace their origins to the breakup of the Fairchild workforce, none accomplished as much in such a short space of time as the new Intel Corporation. A single call from Noyce to Arthur Rock, the venture capitalist, raised $2.3 million in seed funding in a single afternoon.

The ease with which Intel came into being was due in large part to the stature of Robert Noyce and Gordon Moore. Noyce is largely credited with co-inventing the integrated circuit alongside Texas Instruments' Jack Kilby, although his design almost certainly borrowed heavily from earlier work: that of James Nall and Jay Lathrop's team at the U.S. Army's Diamond Ordnance Fuze Laboratory (DOFL), which made the first transistor using photolithography and evaporated aluminum interconnects in 1957-59, and that of Jay Last's integrated circuit team at Fairchild (which included the newly hired James Nall), whose project manager was Robert Noyce.

First planar IC (photo © Fairchild Semiconductor).

Moore and Noyce would take with them from Fairchild the new self-aligned silicon gate MOS (metal oxide semiconductor) technology suitable for fabricating integrated circuits, recently developed by Federico Faggin, who was on loan to Fairchild from a joint venture with Italy's SGS. Building on the work of John Sarace's team at Bell Labs, Faggin would bring his expertise to Intel once he had settled permanently in the US.

Fairchild would rightly feel aggrieved about the defection, as well as the many employee breakthroughs that ended up in the hands of others – National Semiconductor in particular. The brain drain wasn't quite as one-sided as it might seem, however: Fairchild's first microprocessor, the F8, most likely had its origins in Olympia Werke's unrealized C3PF processor project.

At a time when patents had not yet attained the strategic importance they hold today, time to market was paramount, and Fairchild was often too slow to recognize the significance of its own developments. Its R&D department became less product-oriented even as it poured considerable resources into research projects.

Texas Instruments, the second-largest integrated circuit manufacturer, quickly undermined Fairchild's position as market leader. Fairchild still held a prominent place in the industry, but internally its management structure was chaotic. Production quality assurance (QA) was poor by industry standards, with yields of 20% being common.

Over fifty new companies would eventually originate from the breakup of the Fairchild workforce. None achieved as much in such a short time as the new Intel Corp.

While technical staff turnover rose as Fairchildren departed for more stable environments, Fairchild's Jerry Sanders moved from aerospace and defense marketing to director of marketing, and unilaterally decided to bring a new product to market every week – the "Fifty-Two" plan. The accelerated time to market doomed many of these products to yields of around 1%. An estimated 90% of products shipped later than planned, had defects in their design specifications, or both. Fairchild's star was about to be eclipsed.

If the stature of Gordon Moore and Robert Noyce allowed Intel to get off to a flying start as a company, the third man to join the team would become both the public face of the company and its driving force. Andrew Grove, born András Gróf in Hungary in 1936, became Intel's director of operations despite having little manufacturing experience. The choice seemed puzzling on the surface – even allowing for his friendship with Gordon Moore – since Grove was a research and development scientist with a background in chemistry at Fairchild and a senior lecturer at Berkeley, with no corporate management experience.

The fourth man into the company would define its early marketing strategy. Bob Graham was technically Intel's third employee, but he had to give his then-employer three months' notice. The delay in his move to Intel allowed Andy Grove to assume a much larger management role than originally intended.

The first hundred Intel employees pose in front of the company's headquarters in Mountain View, California in 1969.
(Source: Intel / Associated Press)

Graham, an excellent salesman, had been seen as one of two outstanding candidates for the Intel management team – the other, W. Jerry Sanders III, was a personal friend of Robert Noyce. Sanders was one of the few top Fairchild executives retained after C. Lester Hogan was lured away from an aggrieved Motorola to become Fairchild's CEO.

Sanders' initial confidence that he would remain Fairchild's top marketing man quickly waned, as Hogan was unimpressed by Sanders' extravagance and his team's unwillingness to accept small contracts ($1 million or less). Hogan effectively demoted Sanders twice within weeks through the successive promotions of Joseph Van Poppelen and Douglas J. O'Conner above him. The demotions did what Hogan intended – Jerry Sanders resigned, and most of Fairchild's key positions were filled by Hogan's former Motorola executives.

Within a few weeks, four former members of Fairchild's analog division reached out to Jerry Sanders about starting their own business. As originally conceived by the four, the company would produce analog circuits, since the Fairchild breakup (or meltdown) had already encouraged a large number of startups looking to capitalize on the craze for digital circuits. Sanders joined on the understanding that the new company would pursue digital circuits as well. The team would grow to eight members: Sanders; Ed Turney, one of Fairchild's top salesmen; John Carey; chip designer Sven Simonsen; and the original four from the analog division, Jack Gifford, Frank Botte, Jim Giles, and Larry Stenger.

Advanced Micro Devices, as the company would be called, got off to a rocky start. Intel had secured funding in less than a day because it was founded by engineers; investors were far warier of a semiconductor business proposal led by marketing men. The first stop in securing AMD's $1.75 million in initial funding was Arthur Rock, who had funded both Fairchild Semiconductor and Intel. Rock declined to invest, as did a number of other potential sources of money.

Eventually, Tom Skornia, AMD's newly minted legal representative, arrived at Robert Noyce's door. Intel's co-founder duly became one of AMD's founding investors. Noyce's name on the investor list lent the business plan a degree of legitimacy it had previously lacked in the eyes of potential backers. Further funding followed, and the revised target of $1.55 million was met shortly before close of business on June 20, 1969.

AMD got off to a rocky start. But Intel's Robert Noyce, who became one of the company's founding investors, added a measure of legitimacy to its business vision in the eyes of potential investors.

Starting Intel had been rather less complicated, and with funding and premises secured, the company could get straight down to business. Its first commercial product would also be one of five notable industry "firsts" the company delivered in under three years – firsts that would revolutionize both the semiconductor industry and the face of computing.

Honeywell, one of the computer makers living in the shadow of IBM, reached out to numerous chip manufacturers with a request for a 64-bit static RAM chip.

Intel had already formed two chip manufacturing groups: a MOS transistor team led by Les Vadász, and a bipolar transistor team led by Dick Bohn. The bipolar team got there first, and in April 1969 its chief designer, H.T. Chua, handed the world's first 64-bit SRAM chip over to Honeywell. Delivering a successful design at the first attempt, on a million-dollar contract, only added to Intel's early reputation in the industry.

The first product from Intel, a 64-bit SRAM based on the newly developed Schottky Bipolar technology. (CPU zone)

In keeping with the naming conventions of the time, the SRAM chip was marketed under the part number 3101. Intel, like virtually every chip maker of the era, marketed its products not to consumers but to engineers within companies. Part numbers, especially if they carried some meaning such as the number of transistors, were seen as holding more appeal for those potential customers. Conversely, attaching an actual name to a product might imply that the name was masking technical deficiencies or a lack of substance. Intel only moved away from numbers-only part naming when it painfully discovered that numbers could not be trademarked.

While the bipolar team was delivering Intel's first breakout product, the MOS team identified the main culprit behind the flaws in its own chips. The silicon gate MOS process required numerous heating and cooling cycles during chip manufacture. These cycles caused variations in the rates of expansion and contraction between the silicon and the metal oxide, producing cracks that broke the circuits within the chip. Gordon Moore's solution was to "dope" the metal oxide with impurities to lower its melting point, allowing the oxide to flow with the cyclic heating. The resulting chip, which arrived from the MOS team in July 1969 (and was an extension of work done at Fairchild on the 3708 chip), became the first commercial MOS memory chip: the 256-bit 1101.

Honeywell quickly signed up for a successor to the 3101, dubbed the 1102, but early in its development a parallel project – the 1103, led by Vadász with Bob Abbott, John Reed, and Joel Karp (who also oversaw the 1102's development) – showed considerable potential. Both were based on a three-transistor memory cell proposed by Honeywell's William Regitz, which promised much higher cell density and lower manufacturing costs. The drawback was that the memory lost its contents when unpowered, and the circuits had to reapply (refresh) a voltage every two milliseconds to retain data.
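To make the refresh requirement concrete, here is a minimal toy model in Python – an illustrative sketch with invented names and a simplified charge model, not a description of the 1103's actual circuitry:

```python
# Toy model of a DRAM cell's refresh requirement (illustrative only;
# the 2 ms figure is from the article, everything else is simplified).

REFRESH_INTERVAL_MS = 2.0  # charge must be rewritten within this window

class DramCell:
    def __init__(self):
        self.charge = 0.0          # stored charge; leaks away over time
        self.last_write_ms = 0.0   # time of the last write or refresh

    def write(self, bit, now_ms):
        self.charge = 1.0 if bit else 0.0
        self.last_write_ms = now_ms

    def read(self, now_ms):
        # Past the refresh window, the leaked charge makes the bit unreliable.
        if now_ms - self.last_write_ms > REFRESH_INTERVAL_MS:
            return None            # data lost: refresh came too late
        return 1 if self.charge > 0.5 else 0

    def refresh(self, now_ms):
        bit = self.read(now_ms)
        if bit is not None:
            self.write(bit, now_ms)  # rewriting restores full charge

cell = DramCell()
cell.write(1, now_ms=0.0)
cell.refresh(now_ms=1.5)            # refreshed in time, bit preserved
print(cell.read(now_ms=3.0))        # -> 1
stale = DramCell()
stale.write(1, now_ms=0.0)
print(stale.read(now_ms=3.0))       # -> None: never refreshed, bit lost
```

This is the trade-off Regitz's cell imposed: far fewer transistors per bit than static RAM, at the price of dedicated refresh circuitry.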

The first MOS memory chip, Intel 1101, and the first DRAM memory chip, Intel 1103. (CPU zone)

At that time, computer random access memory was the province of magnetic-core memory. That technology was rendered all but obsolete by the introduction of Intel's 1103 DRAM (Dynamic Random Access Memory) chip in October 1970. Once manufacturing defects were ironed out early the following year, Intel held a sizeable head start in a dominant and rapidly growing market – one it enjoyed until the early 1980s, when massive capital investment in production capacity by Japanese memory manufacturers caused a precipitous drop in memory prices.

Intel launched a nationwide marketing campaign inviting magnetic-core memory users to call Intel and cut their system memory expenses by switching to DRAM. At a time when yields and delivery could not be taken for granted, customers inevitably asked about a second source of supply for the chips.

Andy Grove was vehemently opposed to second sourcing, but such was Intel's standing as a young company that it had to bow to industry demand. Intel chose a Canadian company, Microsystems International Limited (MIL), as its first second source of chip supply, rather than a larger, more experienced company that might have come to dominate Intel with its own products. Intel gained around $1 million from the licensing agreement, and stood to gain further when MIL tried to boost profits by increasing the wafer size (from two inches to three) and shrinking the die. MIL's customers turned to Intel when the Canadian company's chips started coming off the line defective.

Intel launched a nationwide marketing campaign encouraging magnetic core memory users to call Intel and reduce their system memory expenses by switching to DRAM.

Intel's early experience was not indicative of second sourcing across the industry as a whole, nor of its own later problems with the practice. AMD's growth was aided directly by becoming a second source for Fairchild's 9300-series transistor-transistor logic (TTL) chips, and by securing, designing, and delivering a custom chip for Westinghouse's military division that Texas Instruments (the original contractor) was struggling to produce on schedule.

Intel's early manufacturing defects with the silicon gate process also gave rise to its third, and most directly profitable, chip, along with an industry-leading edge in yields. Intel hired another Fairchild alumnus, the young physicist Dov Frohman, to investigate the process problems. Frohman determined that the gates of some transistors had become disconnected, left "floating" in the oxide that separated them from their electrodes.

Frohman also showed Gordon Moore that such floating gates could hold an electrical charge (in some cases for decades) thanks to the insulator surrounding them, and could therefore be programmed. Moreover, the floating gate's charge could be dissipated with ultraviolet radiation, erasing the programming.

With conventional memory, circuit programming had to be completed at the chip factory, with fuses incorporated into the design to allow for variations in programming. That method was costly on a small scale, required many different chips for individual purposes, and meant a new chip whenever the circuitry was reworked or redesigned.

The EPROM (Erasable Programmable Read-Only Memory) revolutionized the technology, making memory programming far more accessible and many times faster, since the customer no longer had to wait for application-specific chips to be manufactured.

The disadvantage of the technology was that, for UV light to erase the chip, a relatively expensive quartz window had to be built into the chip packaging directly above the die to let the light in. The high cost would later be addressed by one-time programmable (OTP) EPROMs, which eliminated the quartz window (and with it the erase function), and by electrically erasable programmable ROMs (EEPROMs).
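The programming model this gives rise to is asymmetric: programming can only pull erased bits (which read as 1s) down to 0, while erasure wipes the entire chip back to all 1s. The following Python sketch is a hypothetical toy model of that behavior – the class and its methods are invented for illustration, not a model of the 1702's electrical interface:

```python
# Toy model of EPROM program/erase semantics (illustrative sketch).

class Eprom:
    ERASED = 0xFF  # an erased byte reads as all 1s

    def __init__(self, size):
        self.cells = [self.ERASED] * size

    def program(self, addr, value):
        # Programming traps charge on floating gates, so bits can only
        # be cleared (1 -> 0); cleared bits cannot be set back to 1.
        self.cells[addr] &= value

    def erase(self):
        # UV light through the quartz window dissipates the trapped
        # charge on every gate, restoring the whole chip to 1s.
        self.cells = [self.ERASED] * len(self.cells)

    def read(self, addr):
        return self.cells[addr]

rom = Eprom(4)
rom.program(0, 0b10100101)
print(bin(rom.read(0)))     # 0b10100101
rom.program(0, 0b11110000)  # further programming can only clear bits
print(bin(rom.read(0)))     # 0b10100000
rom.erase()                 # whole-chip erase, as with UV exposure
print(bin(rom.read(0)))     # 0b11111111
```

OTP EPROMs kept exactly these programming semantics but dropped the erase path along with the quartz window.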

As with the 3101, initial yields were very poor – often less than 1%. The 1702 EPROM required a precise voltage for memory writes, and variations in production led to inconsistent write-voltage requirements: too little voltage and programming would be incomplete, too much and the chip risked destruction. Joe Friedrich, recently lured away from Philco and another who had honed his craft at Fairchild, hit upon applying a high negative voltage to the chips before writing any data. Friedrich called the process "walking out," and it raised yields from one chip per two wafers to sixty per wafer.

Intel 1702, the first EPROM chip. (computermuseum.li)

Since walking out did not physically alter the chip, other manufacturers selling Intel-developed ICs did not immediately discover the reason for Intel's jump in yields. Those higher yields had a direct impact on Intel's bottom line, as revenue grew 600% between 1971 and 1973. The yields, outstanding compared with those of the second-source companies, gave Intel a distinct advantage on the same parts sold by AMD, National Semiconductor, Signetics, and MIL.

ROM and DRAM were two essential components of a system that would become a milestone in the development of personal computing. In 1969, the Nippon Calculating Machine Corporation (NCM) approached Intel with a request for a twelve-chip system for a new desktop calculator. Intel, which was still developing its SRAM, DRAM, and EPROM chips at this point, was keen to land its first business deals.

NCM's original proposal called for eight calculator-specific chips, but Intel's Ted Hoff hit upon the idea of borrowing from the larger minicomputers of the day. Rather than using individual chips for individual tasks, a single general-purpose chip would handle the combined workload, turning the individual tasks into sub-programs, just as larger computers did. Hoff's idea reduced the number of chips required to just four: a shift register for input/output, a ROM chip, a RAM chip, and the new processor chip.
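In software terms, Hoff's insight was to replace several fixed-function chips with one processor that fetches instructions from ROM and dispatches them to shared sub-programs. This hypothetical Python miniature (the instruction names and calculator task are invented for illustration) shows the shape of that fetch-and-dispatch loop:

```python
# A general-purpose processor loop in miniature: one engine executes
# sub-programs stored in "ROM" instead of one chip per fixed task.

def add(state, operand):        # sub-programs stand in for dedicated logic
    state["acc"] += operand

def store(state, operand):
    state["mem"][operand] = state["acc"]

SUBROUTINES = {"ADD": add, "STORE": store}

# The "ROM": a calculator-style program, here summing two entries.
PROGRAM = [("ADD", 7), ("ADD", 35), ("STORE", 0)]

def run(program):
    state = {"acc": 0, "mem": {}}
    for opcode, operand in program:          # fetch-decode-execute
        SUBROUTINES[opcode](state, operand)  # dispatch to a sub-program
    return state

print(run(PROGRAM)["mem"])  # -> {0: 42}
```

Changing the product then means changing the ROM contents rather than the silicon – which is precisely what made a general-purpose chip marketable beyond NCM's calculators.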

NCM and Intel signed the contract for the new system on February 6, 1970, with Intel receiving an advance of $60,000 against a minimum order of 60,000 kits (each containing a minimum of eight chips) over three years. The job of getting the processor and its three support chips working would fall to yet another dissatisfied Fairchild employee.

Federico Faggin was disaffected both by Fairchild's inability to translate his research breakthroughs into tangible products before competitors exploited them, and by his own position as a manufacturing process engineer when his primary interest lay in chip architecture. He contacted Les Vadász at Intel and was invited to lead a design project about which he was told nothing more descriptive than that it was "challenging." Faggin found out what the four-chip MCS-4 project entailed on April 3, 1970 – his first day on the job – when he was briefed by engineer Stan Mazor. The next day, Faggin was thrown in at the deep end, meeting Masatoshi Shima, the NCM representative, who expected to see the processor's logic design rather than hear an outline from a man who had been on the project for less than a day.

Intel 4004, the first commercial microprocessor, had 2,300 transistors and ran at a clock rate of 740 kHz. (CPU zone)

Faggin's team, which Shima joined for the duration of the design phase, quickly set about developing the four chips. The design for the simplest of them, the 4001, was completed within a week, its layout taking a single draftsman a month to finish. By May, the 4002 and 4003 had been designed and work on the 4004 microprocessor had begun. The first pre-production run came off the line in December, but because the crucial buried-contact layer had been omitted during manufacture, the chips were inoperable. A second revision corrected the error, and three weeks later all four working chips were ready for the test phase.

The 4004 could have been a footnote in semiconductor history had it remained a custom part for NCM, but falling consumer electronics prices, especially in the highly competitive desktop calculator market, prompted NCM to go back to Intel and demand a cut in the contract's agreed unit prices. Knowing the 4004 could have many more uses, Bob Noyce proposed a price reduction and a refund of NCM's $60,000 advance in exchange for Intel's right to sell the 4004 to customers in markets other than calculators. This made the 4004 the first commercially available microprocessor.

Two other designs of the era remained tied to complete systems: Garrett AiResearch's MP944 was a component of the Central Air Data Computer of the Grumman F-14 Tomcat, tasked with optimizing the fighter's variable-geometry wings and glove vanes, while Texas Instruments' TMS 0100 and TMS 1000 were initially available only as parts of handheld calculators such as the Bowmar 901B.

The 4004 could have been a footnote in semiconductor history if it had remained a custom part for NCM.

While the 4004 and MP944 required a number of support chips (ROM, RAM, and I/O), the Texas Instruments chip combined these functions with the CPU on a single die – the world's first microcontroller, or "computer-on-a-chip" as it was marketed at the time.

Inside the Intel 4004

Texas Instruments and Intel would cross-license logic, process, microprocessor, and microcontroller IP in 1971 (and again in 1976), ushering in an era of cross-licensing, joint ventures, and patents wielded as commercial weapons.

The completion of the NCM (Busicom) MCS-4 system freed up resources to continue a more ambitious project whose origins predated the design of the 4004. In late 1969, Computer Terminal Corporation (CTC, later Datapoint) had contacted both Intel and Texas Instruments with a request for an 8-bit terminal controller.

Texas Instruments dropped out fairly early, and development of Intel's 1201 project, begun in March 1970, stalled in July when project leader Hal Feeney was co-opted onto a static RAM chip project. CTC would eventually opt for a simpler, discrete collection of TTL chips as its deadlines approached. The 1201 project lay dormant until Seiko showed interest in using the chip in a desktop machine, and Faggin's completion of the 4004 in January 1971 allowed work to resume.

In today's environment it seems almost incomprehensible that microprocessor development took a back seat to memory, but in the late 1960s and early 1970s, computing was the province of mainframes and minicomputers.

In today's environment it seems almost incomprehensible that microprocessor development took a back seat to memory, but in the late 1960s and early 1970s, computing was the province of mainframes and minicomputers. Fewer than 20,000 mainframes were sold annually worldwide, and IBM dominated this relatively small market, trailed by UNIVAC, GE, NCR, CDC, RCA, Burroughs, and Honeywell – the "Seven Dwarfs" to IBM's "Snow White." Meanwhile, Digital Equipment Corporation (DEC) effectively owned the minicomputer market. Intel's management, like that of other microprocessor companies, could not yet see their chips usurping the mainframe and the minicomputer, whereas the new memory chips could serve those sectors in volume.

The 1201 duly arrived in April 1972 and was renamed the 8008 to signal that it was a successor to the 4004. The chip enjoyed reasonable success but was hampered by its reliance on 18-pin packaging, which limited its input/output (I/O) and external bus options. Relatively slow, and still programmed with early assembly language and machine code, the 8008 remained far from the usability of modern CPUs, although IBM's recent introduction and marketing of the 23FD 8-inch floppy disk would give the microprocessor market added impetus over the next few years.

Intellec 8 development system (computhistory.org.uk)

Intel's push for broader acceptance saw the 4004 and 8008 incorporated into the company's first development systems, the Intellec 4 and Intellec 8. The latter would play an important role in the development of the first microprocessor-oriented operating system, as well as in a "what if" moment for both industries and for Intel's own history. Feedback from users and prospective customers, together with the growing complexity of calculator-based processors, drove the 8008's evolution into the 8080, which would finally kick-start the development of personal computers.

This article is the first installment in a series of five. If you enjoyed it, read on as we delve into the birth of the first personal computer companies. Or, if you'd like to learn more about the history of computing, check out our fantastic series on the history of computer graphics.
