
Central processing unit

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 195.194.74.154 (talk) at 15:26, 4 October 2005 (Discrete component transistor CPUs). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.


A central processing unit (CPU) is the part of a computer that interprets and carries out, or processes, the instructions and data contained in software. The more generic term processor can also refer to a CPU; see processor (disambiguation) for other uses of the term. Microprocessors are CPUs that are manufactured on integrated circuits, often as a single-chip package. Since the mid-1970s, these single-chip microprocessors have become the most common and prominent implementations of CPUs, and today the term is almost always applied to this form.

The term "Central processing unit" is, in general terms, a functional description of a certain class of programmable logic machines. This broad definition can easily be applied to many early computers that existed long before the term "CPU" ever came into widespread usage. The term and its acronym have been in use at least since the early 1960s.

History

File:IBM 603 multiplier.jpg
IBM 603 vacuum tube multiplier. Similar units were included as part of early electronic computers.

Prior to the advent of machines that resemble today's CPUs, computers such as ENIAC had to be physically rewired in order to perform different tasks. These machines are often referred to as "fixed program computers" since they had to be physically reconfigured in order to run a different program. The earliest devices that could rightly be called CPUs came with the advent of the stored program computer. The idea of a stored program computer was already present during the design of ENIAC, but was not used in that machine due to speed considerations. Before ENIAC was even completed, on June 30, 1945, mathematician John von Neumann published the paper entitled First Draft of a Report on the EDVAC, which outlined the design of a stored program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions (or operations) of various types. These instructions could be combined to create useful programs for the EDVAC to run. Significantly, the programs written for EDVAC were stored in high-speed computer memory rather than being specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC: the large amount of time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program, or software, that EDVAC ran could be changed simply by changing the contents of the computer's memory.

While von Neumann is most often credited with the design of the stored program computer because of his design of EDVAC, others before him, such as Konrad Zuse, had suggested similar ideas. Additionally, the so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, used a stored-program design based on punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and handling of CPU instructions and data, while the former uses the same memory space for both. Most modern CPUs are primarily von Neumann in design, but elements of the Harvard architecture are commonly seen as well.

CPU, core memory, and external bus interface of a PDP-8/I

Discrete component transistor CPUs


Microprocessors

The most recent technological improvement that has affected the design and implementation of CPUs came in the mid-1970s with the microprocessor. Since the introduction of the first microprocessor (the Intel 4004) in 1971 and the first widely used microprocessor (the Intel 8080) in 1974, this class of CPUs has almost completely overtaken all other implementations. While the previous generation of CPUs was integrated as discrete components on one or more circuit boards, microprocessors are manufactured onto compact integrated circuits (ICs), often a single chip. As the ability to construct exceedingly small transistors on an IC has increased, the complexity of and number of transistors in a single CPU has increased dramatically. This trend is often described by Moore's law, which has proven to be a fairly accurate model of the growth of CPU (and other IC) complexity to date.

While the complexity, size, construction, and general form of CPUs have changed drastically over the past sixty years, the basic design and function have not changed much at all. Almost all common CPUs today can still be accurately described as von Neumann stored program machines. As the aforementioned Moore's law continues to hold true, concerns about the limits of integrated circuit transistor technology have become much more prevalent, prompting researchers to investigate new methods of computing, such as the quantum computer, and to expand the use of parallelism and other methods that extend the usefulness of the classical von Neumann model.

CPU operation

The fundamental operation of most CPUs, regardless of the physical form they take, is to execute a sequence of stored instructions called a program. This discussion concerns the very common class of CPUs that conform to the von Neumann architecture. The program is represented by a series of numbers kept in some kind of computer memory. There are three steps that nearly all von Neumann CPUs use in their operation: fetch, decode, and execute.

The first step, fetch, involves retrieving a sequence of numbers from program memory, whose location is determined by a "program counter." The program counter stores a number that identifies the current location in this sequence; in other words, it marks the CPU's place in the current program. The numbers that the CPU fetches from memory ultimately instruct it as to how to proceed.

In the decode step, the number is broken up into parts that have significance to the CPU. Often, some of the digits of the number (for example, the so-called "high" or "low" bits in a binary CPU's memory) indicate which operation (called an instruction) to perform. The remaining digits usually provide information required for that instruction, such as operands for an addition operation. These operands generally come in two forms: a memory address or a constant number. Some types of instructions manipulate the program counter. These are generally called "jumps" and facilitate behavior like loops, conditional program execution (through the use of a conditional jump), and functions in programs. Many instructions also change the state of digits in a "flags" register (a single word of fast memory). These flags can be used to influence how a program behaves, since they often indicate the outcome of various operations. For example, if two numbers are added together and produce a result larger than the CPU is designed to handle, an arithmetic overflow flag may be set (see the discussion of bit width below).
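The decode step can be illustrated with a small sketch. The instruction format here is entirely hypothetical (an invented 8-bit word with a 3-bit opcode in the high bits and a 5-bit operand in the low bits), chosen only to show how "high" and "low" digits of a fetched number can be separated:

```python
# Hypothetical 8-bit instruction word: the high 3 bits name the
# operation (opcode) and the low 5 bits hold the operand.
# This layout is invented for illustration, not taken from any real CPU.
def decode(word):
    opcode = (word >> 5) & 0b111    # "high" bits: which instruction to perform
    operand = word & 0b11111        # "low" bits: a memory address or constant
    return opcode, operand

# The word 0b101_00011 decodes to opcode 5 with operand 3.
print(decode(0b10100011))
```

A real decoder does the same separation in hardware, routing the opcode bits to control logic and the operand bits to the datapath.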

After the fetch and decode steps, the execute step is performed. During this step, various portions of the CPU are "connected" (by some switching device, like a relay or a transmission gate) so they can perform the desired operation. If, for instance, an addition operation was requested, an ALU will be connected to a set of inputs and a set of outputs. The inputs provide the numbers to be added, and the outputs will contain the final sum. If, as discussed earlier, the addition operation produces a result too large for the CPU to handle, an overflow flag may also be set.

This process then repeats due to the ever incrementing program counter. In more complex CPUs than the one described here, multiple instructions can be fetched, decoded, and executed simultaneously. This section describes what is generally referred to as a 'single cycle data path,' which in fact is quite common among the simple CPUs used in many electronic devices (often called microcontrollers).

Integer precision

The way a CPU represents numbers is determined by its design. Some early digital computers used the familiar decimal (base ten) numeral system to represent numbers internally. Other computers have used more exotic numeral systems, such as ternary (base three). By far the most common numeral system used in CPUs is binary (base two). Nearly all modern CPUs represent numbers in binary form, with each digit represented by some physical quantity such as a "high" or "low" voltage.

Related to number representation is the size and precision of numbers that a CPU can represent. In the case of a binary CPU, a 'bit' refers to one significant place in the numbers a CPU deals with. The number of bits (or numeral places) a CPU uses to represent numbers is often called "bit width," "data path width," or "integer precision" when dealing with strictly integer numbers (as opposed to floating point). This number differs between architectures, and often within different parts of the very same CPU. For example, an 8-bit CPU deals with a range of numbers that can be represented by eight binary digits, that is, 2⁸ or 256 discrete numbers. Integer precision can also affect the number of locations in memory the CPU can "address" (locate). For example, if a binary CPU uses 32 bits to represent a memory address, and each memory address represents one octet (8 bits), the maximum quantity of memory that CPU can address is 2³² octets, or 4 GiB. This is a very simple view of CPU address space, and many modern designs use much more complex addressing methods in order to locate more memory with the same integer precision.
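The figures in the preceding paragraph follow directly from the bit width: n bits yield 2ⁿ distinct values, and a byte-addressed machine with a 32-bit address reaches 2³² octets. A quick check of that arithmetic:

```python
# n bits can represent 2**n distinct values.
values_8bit = 2 ** 8        # range of an 8-bit quantity
print(values_8bit)          # 256

# A 32-bit address, with one octet per address, spans 2**32 octets.
addressable = 2 ** 32
print(addressable // (1024 ** 3))   # 4 (GiB, i.e. 2**30-octet units)
```

The same computation generalizes: a 64-bit address space spans 2⁶⁴ octets, which is why the jump from 32-bit to 64-bit addressing removed the 4 GiB ceiling.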

Higher levels of integer precision require more structures to deal with the additional digits, and therefore more complexity, size, power usage, and generally expense. It is not at all uncommon, therefore, to see 4 or 8 bit microcontrollers used in modern applications, even though CPUs with much higher precision (such as 16, 32, 64, even 128 bit) are available. The simpler microcontrollers are usually cheaper, use less power, and therefore dissipate less heat, all of which can be major design considerations for electronic devices. However, in higher-end applications, the benefits afforded by the extra precision (most often the additional address space) are more significant and often affect design choices.

Design and implementation

Early digital designs

While the fundamental operation of the CPU has changed little over the years, the complexity and methods of implementation have. Being digital devices, all CPUs deal with discrete states and therefore require some kind of switching element to differentiate between and change these states. In the early days of electromechanical and electronic computers, electrical relays and vacuum tubes (thermionic valves) were commonly used as switching elements. Although these had distinct speed advantages over earlier, purely mechanical designs, they were unreliable for various reasons. For example, building direct current sequential logic circuits out of relays requires additional hardware to cope with the problem of contact bounce. While vacuum tubes do not suffer from contact bounce, they must heat up before becoming fully operational and eventually stop functioning due to the slow contamination of their cathodes that occurs when the tubes are in use. Usually, when a tube failed, the CPU would have to be diagnosed to locate the failing unit so it could be replaced. Therefore, early electronic (vacuum tube based) computers were generally faster, but less reliable, than electromechanical (relay based) computers. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the (slower, but earlier) Harvard Mark I failed very rarely. In the end, tube based CPUs became dominant because the significant speed advantages they afforded generally outweighed the reliability problems.

Early transistor designs

Microelectronic designs

Notable CPU architectures

CPU architectures are subject to market competition. The following lists of CPUs are the results of engineering projects which attempted to gain some competitive advantage. The architectures listed here have found some market niche which is continually subject to reappraisal in the marketplace.

Embedded CPU architectures

Microcomputer/PC CPU architectures

Workstation/Server CPU architectures

Mini/Midrange/Mainframe CPU architectures

Emerging CPU architectures

Historically important CPUs

See also

References
