Floating point operations per second
Computer performance
| Name | FLOPS |
|---|---|
| megaflop | 10⁶ |
| gigaflop | 10⁹ |
| teraflop | 10¹² |
| petaflop | 10¹⁵ |
| exaflop | 10¹⁸ |
| zettaflop | 10²¹ |
| yottaflop | 10²⁴ |
| xeraflop | 10²⁷ |
In computing, FLOPS (also flops or flop/s) is an acronym meaning FLoating point Operations Per Second. FLOPS is a measure of a computer's performance, analogous to instructions per second, and is especially relevant in fields of scientific calculation that make heavy use of floating-point operations. Since the final S stands for "second", conservative speakers consider "FLOPS" both the singular and plural of the term, although the singular "FLOP" is frequently encountered. Alternatively, the singular FLOP (or flop) is used as an abbreviation for "FLoating-point OPeration", and a flop count is a count of these operations (e.g., the number required by a given algorithm or computer program); in this context, "flops" is simply a plural rather than a rate.
Computing devices exhibit an enormous range of performance levels in floating-point applications, so it makes sense to introduce larger units than FLOPS. The standard SI prefixes can be used for this purpose, resulting in such units as gigaFLOPS (one billion or 1×10⁹ FLOPS), teraFLOPS (one trillion or 1×10¹² FLOPS) and petaFLOPS (one quadrillion or 1×10¹⁵ FLOPS). IBM's top supercomputer, dubbed Blue Gene/P, is designed to continuously operate at speeds exceeding one petaFLOPS and, when configured to do so, should be able to reach speeds in excess of three petaFLOPS.[1] NEC's SX-9 supercomputer has a peak processing performance of 839 teraFLOPS and features the world's first vector processor to exceed 100 gigaFLOPS per single core.
A basic calculator performs relatively few FLOPS. Each calculation request to a typical calculator requires only a single operation, so there is rarely any need for it to respond faster than the operator can read. Any response time below 0.1 second is perceived as instantaneous by a human operator,[citation needed] so a simple calculator needs only about 10 FLOPS (one operation per 0.1 second). Allowing for human reaction delays, the actual rate can be far lower.
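A minimal sketch of the arithmetic behind this estimate, using only the two figures stated above (one operation per request, a 0.1-second perception threshold):

```python
# One operation delivered within the 0.1 s "instantaneous" threshold
# implies a required rate of about 10 FLOPS.
ops_per_request = 1     # a typical calculator keystroke triggers one operation
response_time_s = 0.1   # threshold below which a response feels instantaneous

required_flops = ops_per_request / response_time_s
print(f"required rate: {required_flops:.0f} FLOPS")  # -> required rate: 10 FLOPS
```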
Measuring performance
In order for FLOPS to be useful as a measure of floating-point performance, a standard benchmark must be available on all computers of interest. One example is the LINPACK benchmark.
There are many factors in computer performance other than raw floating-point computation speed, such as I/O performance, interprocessor communication, cache coherence, and the memory hierarchy. This means that supercomputers are in general capable of only a small fraction of their "theoretical peak" FLOPS throughput (obtained by adding together the theoretical peak FLOPS performance of every element of the system). Even when operating on large, highly parallel problems, their performance will be bursty, mostly because of the residual effects of Amdahl's law. Real benchmarks therefore measure both actual peak FLOPS performance and sustained FLOPS performance.
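To make the peak-versus-sustained distinction concrete, here is a minimal sketch: theoretical peak multiplies out the machine parameters over every element of the system, while a LINPACK-style achieved figure divides the operation count of an n×n dense solve (conventionally 2n³/3 + 2n²) by wall-clock time. The hardware numbers below are illustrative placeholders, not measurements of any real system.

```python
def theoretical_peak_flops(nodes, cores_per_node, clock_hz, flops_per_cycle):
    """Theoretical peak: multiply out every element of the system."""
    return nodes * cores_per_node * clock_hz * flops_per_cycle

def linpack_flops(n, seconds):
    """Achieved FLOPS for an n-by-n LINPACK solve, which performs
    roughly 2/3 * n**3 + 2 * n**2 floating-point operations."""
    return (2.0 / 3.0 * n**3 + 2.0 * n**2) / seconds

# Placeholder machine: 1024 nodes x 4 cores x 2 GHz x 4 FLOPs per cycle.
peak = theoretical_peak_flops(nodes=1024, cores_per_node=4,
                              clock_hz=2.0e9, flops_per_cycle=4)
achieved = linpack_flops(n=100_000, seconds=40.0)

print(f"peak:       {peak / 1e12:.1f} TFLOPS")      # -> 32.8 TFLOPS
print(f"achieved:   {achieved / 1e12:.1f} TFLOPS")  # -> 16.7 TFLOPS
print(f"efficiency: {achieved / peak:.0%}")         # -> 51%
```

The gap between the two printed figures is exactly the effect the paragraph above describes: communication, memory, and serial fractions keep sustained throughput well below the sum of the parts.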
For ordinary (non-scientific) applications, integer operations (measured in MIPS) are far more common. Measuring floating point operation speed, therefore, does not predict accurately how the processor will perform on just any problem. However, for many scientific jobs such as analysis of data, a FLOPS rating is effective.
Historically, the earliest reliably documented serious use of the floating-point operation as a metric appears to be the Atomic Energy Commission's justification to Congress for purchasing a Control Data CDC 6600 in the mid-1960s.
The terminology has long been confusing: until April 24, 2006, U.S. export control was based upon measurement of "Composite Theoretical Performance" (CTP) in millions of "Theoretical Operations Per Second", or MTOPS. On that date, however, the U.S. Department of Commerce's Bureau of Industry and Security amended the Export Administration Regulations to base controls on Adjusted Peak Performance (APP) in Weighted teraFLOPS (WT).
Records
On June 8, 2008, an American military supercomputer built by IBM at Los Alamos National Laboratory reached the computing milestone of one petaFLOPS by processing more than 1.026 quadrillion calculations per second. The computer was named Roadrunner, after the state bird of New Mexico.[2]
On February 4, 2008, the NSF and the University of Texas opened full-scale research runs on Ranger, an AMD/Sun supercomputer and the most powerful supercomputing system in the world for open science research, which operates at sustained speeds of half a petaFLOPS.
On October 25, 2007, NEC Corporation of Japan issued a press release announcing its SX series model SX-9, claiming it to be the world's fastest vector supercomputer with a peak processing performance of 839 teraFLOPS. The SX-9 features the first CPU capable of a peak vector performance of 102.4 gigaFLOPS per single core.
On June 26, 2007, IBM announced the second generation of its top supercomputer, dubbed Blue Gene/P and designed to continuously operate at speeds exceeding one petaFLOPS. When configured to do so, it can reach speeds in excess of three petaFLOPS.
In June 2007, Top500.org reported the fastest computer in the world to be the IBM Blue Gene/L supercomputer, measuring a peak of 596 TFLOPS. The Cray XT4 hit second place with 101.7 TFLOPS.
In June 2006, the Japanese research institute RIKEN announced a new computer, the MDGRAPE-3. Its performance tops out at one petaFLOPS, almost twice that of Blue Gene/L, but MDGRAPE-3 is not a general-purpose computer, which is why it does not appear in the Top500.org list: it has special-purpose pipelines for simulating molecular dynamics.
Distributed computing uses the Internet to link personal computers to achieve a similar effect:
- The entire BOINC network averages over 1,000 TFLOPS (1 PFLOPS) as of March 16, 2008.[3]
- SETI@home averages more than 265 TFLOPS.[4]
- Folding@home had reached over 1 PFLOPS[5] as of September 15, 2007.[6] Since March 22, 2007, PlayStation 3 owners may participate in the Folding@home project; thanks to this and to high-performance GPU clients, Folding@home is now sustaining over 2,000 TFLOPS (2,053 TFLOPS as of May 8, 2008). See the current stats[7] for details.
- Einstein@Home is crunching more than 150 TFLOPS.[8]
- As of June 2007, GIMPS is sustaining 23 TFLOPS.[9]
- Intel Corporation has recently unveiled the experimental multi-core POLARIS chip, which achieves 1 TFLOPS at 3.2 GHz. The 80-core chip can increase this to 1.8 TFLOPS at 5.6 GHz, although the thermal dissipation at this frequency exceeds 260 watts.
As of 2007, the fastest PC processors (quad-core) perform over 30 GFLOPS.[10] GPUs in PCs are considerably more powerful in pure FLOPS. For example, in the GeForce 8 series, the Nvidia 8800 Ultra performs around 576 GFLOPS across 128 processing elements. This equates to around 4.5 GFLOPS per element, compared with 2.75 GFLOPS per core for Blue Gene/L. Note, however, that the 8800 series performs only single-precision calculations, and that while GPUs are highly efficient at such calculations, they are not as flexible as a general-purpose CPU.
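As a one-line check of the per-element arithmetic above (both inputs are the figures quoted in this paragraph):

```python
# Per-element throughput: total GFLOPS divided by the number of processing elements.
gpu_gflops, gpu_elements = 576, 128
print(f"{gpu_gflops / gpu_elements:.2f} GFLOPS per element")  # -> 4.50 (8800 Ultra)
```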
As of November 2007, the TOP500 list of the most powerful supercomputers (excluding grid computers) is headed by IBM's BlueGene/L System, with just under half a petaflop of processing power.
In May 2008, a collaboration was announced among NASA, SGI, and Intel to build a one-petaFLOPS computer in 2009, scaling up to 10 petaFLOPS by 2012.[11]
Cost of computing
Hardware costs:
- 1961: about US$1,100,000,000,000 ($1.1 trillion) per GFLOPS (≈US$1,100 per FLOPS), based on roughly 17 million IBM 1620 units @ $64,000 each, with a multiplication operation taking 17.7 ms[12] (see the sketch following this list)
- 1997: about US$30,000 per GFLOPS; with two 16-Pentium-Pro–processor Beowulf cluster computers[13]
- 2000, April: $1,000 per GFLOPS, Bunyip, Australian National University; the first machine under US$1/MFLOPS. Gordon Bell Prize 2000.
- 2000, May: $640 per GFLOPS, KLAT2, University of Kentucky
- 2003, August: $82 per GFLOPS, KASY0, University of Kentucky
- 2006, February: about $1 per GFLOPS in ATI PC add-in graphics card (X1900 architecture) — these figures are disputed as they refer to highly parallelized GPU power.
- 2007, March: about $0.42 per GFLOPS in the Ambric AM2045.[14]
- 2007, October: about $0.20 per GFLOPS with the cheapest retail Sony PS3 console, at US$400, that runs at a claimed 2 teraFLOPS; these figures represent the processing power of the GPU. The seven CPUs run collectively at a lower 218 GFLOPS.[15]
This trend toward lower and lower cost for the same computing power follows Moore's law.
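Each of these dollars-per-GFLOPS figures is simply unit price divided by per-unit throughput. As a sketch, the 1961 entry can be reproduced from the IBM 1620 numbers quoted in the list:

```python
# Reproduce the 1961 figure: an IBM 1620 cost $64,000 and took 17.7 ms
# per multiplication, so a GFLOPS would have required millions of machines.
unit_price_usd = 64_000
multiply_time_s = 17.7e-3

flops_per_unit = 1.0 / multiply_time_s        # ~56.5 FLOPS per machine
units_for_1_gflops = 1e9 / flops_per_unit     # ~17.7 million machines
cost_per_gflops = units_for_1_gflops * unit_price_usd

print(f"{flops_per_unit:.1f} FLOPS per unit")                      # -> 56.5
print(f"{units_for_1_gflops / 1e6:.1f} million units per GFLOPS")  # -> 17.7
print(f"${cost_per_gflops / 1e12:.2f} trillion per GFLOPS")        # -> $1.13
```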
Operation costs:
In terms of energy cost, according to the Green500 list, as of 2007 the most efficient CPU runs at 357.23 MFLOPS per watt. This translates to an energy requirement of about 2.8 watts per GFLOPS; the requirement is much greater for less efficient CPUs.
Hardware costs for low-cost supercomputers may be less significant than energy costs when running continuously for several years. A PlayStation 3 (PS3) 40 GB model (65 nm Cell) costs $399 and consumes 135 watts,[16] or about $118 of electricity each year, conservatively assuming the U.S. national average residential electric rate of $0.10/kWh[17] (135 watts / 1000 watts per kW × 24 hours × 365 days × $0.10 per kWh = $118.26). The electricity for 3.5 years of operation (about $414) costs more than the PS3 itself. Additional operating costs include air conditioning, floor space, and lighting.
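A short sketch of the energy arithmetic in this section, using only the figures quoted above (the Green500 efficiency and the PS3's power draw and electric rate):

```python
# Green500 conversion: 357.23 MFLOPS/W -> watts required per GFLOPS.
print(f"{1000 / 357.23:.1f} W per GFLOPS")          # -> 2.8 W

# PS3 electricity cost: power draw x hours per year x residential rate.
power_w = 135
rate_usd_per_kwh = 0.10
annual_kwh = power_w / 1000 * 24 * 365              # -> 1182.6 kWh
annual_cost = annual_kwh * rate_usd_per_kwh
print(f"${annual_cost:.2f} per year")               # -> $118.26
print(f"${3.5 * annual_cost:.2f} over 3.5 years")   # -> $413.91
```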
References
- ^ IBM Press Release (2007-06-26). "IBM Triples Performance of World's Fastest, Most Energy-Efficient Supercomputer". IBM. Retrieved 2008-01-30.
- ^ http://news.bbc.co.uk/1/hi/technology/7443557.stm
- ^ BOINCstats: Credit overview. http://www.boincstats.com/stats/project_graph.php?pr=bo. Retrieved 2008-02-17.
- ^ SETI at home
- ^ Folding@home
- ^ Folding@home: September 16, 2007 - September 22, 2007
- ^ Folding@Home
- ^ Einstein@Home - Server Status
- ^ Internet PrimeNet Server Parallel Technology for the Great Internet Mersenne Prime Search
- ^ Tom's Hardware's 2007 CPU Charts
- ^ "NASA collaborates with Intel and SGI on forthcoming petaflops super computers". Heise online. 2008-05-09.
- ^ IBM 1961 BRL Report
- ^ Loki and Hyglac
- ^ 204101.qxd
- ^ BBC News: Sony shows off new PlayStation 3
- ^ 40GB PS3 features 65nm chips, lower power consumption
- ^ Average Retail Price of Electricity, U.S. Government Energy Information Administration
External links
- Current Einstein@Home benchmark
- BOINC projects global benchmark
- Current GIMPS throughput
- Top500.org
- LinuxHPC.org Linux High Performance Computing and Clustering Portal
- WinHPC.org Windows High Performance Computing and Clustering Portal
- Oscar Linux-cluster ranking list by CPUs/types and respective FLOPS
- Information on how to calculate "Composite Theoretical Performance" (CTP)
- Information on the Oak Ridge National Laboratory Cray XT system.
- Infiscale Cluster Portal - Free GPL HPC
- Source code, pre-compiled versions and results for PCs - Linpack, Livermore Loops, Whetstone MFLOPS
- PC CPU Performance Comparisons - MFLOPS/MHz - CPU, Caches and RAM