Dual-core

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 200.127.6.198 (talk) at 13:14, 8 June 2005. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

A dual-core CPU combines two independent processors, with their respective caches and cache controllers, onto a single silicon chip, or integrated circuit. IBM's POWER4 was the first microprocessor to incorporate two cores on a single die. Various dual-core CPUs are being developed by companies such as Motorola, Intel and AMD, and are scheduled to appear in consumer products in 2005.

Dual-core CPU technology first became practically viable in 2001*, as the 180 nm CMOS process became feasible for volume production. At this feature size, multiple copies of the largest microprocessor architectures could be incorporated onto a single production die. (Alternative uses of this newly available "real estate" include widening the bus and internal registers of existing CPU cores, or incorporating more high-performance cache memory on-chip.)

Conceptual diagram of a dual-core chip, with CPU-local Level 1 caches, and shared, on-chip Level 2 caches.

(* Digital signal processors (DSPs) have utilized dual-core architectures for much longer than high-end general-purpose processors. A typical DSP-specific implementation combines a RISC CPU with a DSP MPU, allowing the design of products that require a general-purpose processor for the user interface and a DSP for real-time data processing; this type of design is suited to, e.g., mobile phones.)

Commercial examples

Architectural class

The dual-core processor falls into the architectural class of tightly-coupled multiprocessors. In this class, processing units with independent instruction streams execute code from a pool of shared memory. Contention for memory as a resource is managed by arbitration and by per-processor caches. The localized caches make the architecture viable, since modern CPUs are highly optimized to maximize bandwidth at the memory interface; without them, each CPU would run near 50% efficiency. Multiple caches of the same resource must be kept consistent by a cache coherency protocol.

Beyond dual-core processors, there are examples of chips with multiple cores. Such chips include network processors which may have a large number of cores or microengines that may operate independently on different packet processing tasks within a networking application.

Development motivation

Technical pressures

As CMOS process technologies continue to shrink, the constraints on how much complexity can be placed on a single die recede. For CPU designs, the choice becomes adding more functions to the device (e.g. an Ethernet controller, memory controller, or high-speed CPU cache) or adding complexity to increase CPU throughput. Generally speaking, shrinking the features on an IC also means they can run at lower power and at a higher clock rate.

Various potential architectures contend for the additional "real estate" on the die. One option is to widen the registers and/or the bus interface of an existing processor architecture; widening the bus interface alone leads toward superscalar processor architectures, and widening both usually requires new programming models. Other options include adding multiple levels of memory cache, or developing system-on-a-chip solutions.

Commercial incentives

Several business motives drive the development of dual-core architectures. Since multiple-CPU SMP designs have long been implemented using discrete CPUs, the issues of implementing the architecture and supporting it in software are well known. Additionally, utilizing a proven processing core design (e.g. Freescale's e700 core) without architectural changes reduces design risk significantly. Finally, the terminology "dual-core" (and other multiples) lends itself to marketing efforts.

Additionally, for general-purpose processors, much of the motivation for dual-core designs comes from the increasing difficulty of improving performance by raising the operating frequency (frequency scaling). To continue delivering regular performance improvements, manufacturers such as Intel have turned to dual-core designs, accepting higher manufacturing costs in exchange for higher performance in some applications and systems.

While dual-core architectures are being developed, so are the alternatives. An especially strong contender in established markets is integrating more peripheral functions into the chip.

Advantages

The proximity of two CPU cores on the same die has the advantage that the cache coherency circuitry can operate at a much higher clock rate than is possible if the signals must travel off-chip; combining equivalent CPUs on a single die therefore significantly improves the performance of cache snoop operations.

Assuming that the die physically fits into the package, dual-core CPU designs require much less printed circuit board (PCB) space than multi-chip SMP designs.

A dual-core processor uses slightly less power than two coupled single-core processors, principally because less power is needed to drive signals external to the chip and because the smaller silicon process geometry allows the cores to operate at lower voltages.

In terms of competing uses for the available silicon die area, a dual-core design can reuse proven CPU core library designs, yielding a product with lower risk of design error than devising a new, wider core design. Also, adding more cache suffers from diminishing returns.

Disadvantages

Dual-core processors require operating system (OS) support to make optimal use of the second computing resource.* Also, making optimal use of multiprocessing in a desktop context requires application software support.

The higher integration of a dual-core chip drives production yields down, and such chips are more difficult to manage thermally than lower-density single-core designs.

From an architectural point of view, single-CPU designs may ultimately make better use of the silicon surface area than multiprocessing cores, so a development commitment to this architecture carries a risk of obsolescence.

Scaling efficiency is largely dependent on the application or problem set. For example, applications that process large amounts of data with algorithms of low computational overhead may find that this architecture is I/O-bottlenecked, leaving the device underutilized.

(* Two types of OS are able to utilize a dual-CPU multiprocessor: partitioned multiprocessing and symmetric multiprocessing (SMP). In a partitioned architecture, each CPU boots into a separate segment of physical memory and operates independently; under an SMP OS, the processors work in a shared space, executing threads within the OS independently.)

If a dual-core processor has only one memory bus (which is often the case), the available memory bandwidth per core is half that available in a dual-processor, single-core system.

Another issue that has surfaced in recent business developments is the controversy over whether dual-core processors should be treated as two separate CPUs for software licensing purposes. Enterprise server software is typically licensed per processor, and some software manufacturers feel that a dual-core processor, while a single chip, should be treated as two processors, with the customer charged for two licenses, one for each core. This has been challenged because not all systems with dual-core processors run operating systems that can exploit both cores. It remains an unresolved and thorny issue for software companies and customers.

See also