Sorting algorithm
In computer science and mathematics, a sorting algorithm is an algorithm that puts elements of a list in a certain order. The most used orders are numerical order and lexicographical order. Efficient sorting is important to optimizing the use of other algorithms (such as search and merge algorithms) that require sorted lists to work correctly; it is also often useful for canonicalizing data and for producing human-readable output.
Since the dawn of computing, the sorting problem has attracted a great deal of research, perhaps due to the complexity of solving it efficiently despite its simple, familiar statement. For example, bubble sort was analyzed as early as 1956.[1] Although many consider it a solved problem, useful new sorting algorithms are still being invented to this day (for example, library sort was first published in 2004). Sorting algorithms are prevalent in introductory computer science classes, where the abundance of algorithms for the problem provides a gentle introduction to a variety of core algorithm concepts such as big O notation, divide-and-conquer algorithms, data structures, randomized algorithms, worst-case, average-case, and best-case analysis, time-space tradeoffs, and lower bounds.
Classification
Sorting algorithms used in computer science are often classified by:
- Computational complexity (worst, average, and best behaviour) in terms of the size of the list (n). For typical sorting algorithms, good behaviour is O(n log n) and bad behaviour is O(n²). Ideal behaviour for a sort is O(n). Sort algorithms which only use an abstract key comparison operation need at least Ω(n log n) comparisons in the average and worst case;
- Memory usage (and use of other computer resources)
- Stability: stable sorting algorithms maintain the relative order of records with equal keys (i.e. values). That is, a sorting algorithm is stable if whenever there are two records R and S with the same key and with R appearing before S in the original list, R will appear before S in the sorted list.
- Whether or not they are a comparison sort. A comparison sort examines the data only by comparing two elements with a comparison operator.
- General method: insertion, exchange, selection, merging, etc. Exchange sorts include bubble sort and quicksort. Selection sorts include straight selection sort and heapsort.
When equal elements are indistinguishable, such as with integers, stability is not an issue. However, assume that the following pairs of numbers are to be sorted by their first coordinate:
(4, 1) (3, 1) (3, 7) (5, 6)
In this case, two different results are possible, one which maintains the relative order of records with equal keys, and one which does not:
(3, 1) (3, 7) (4, 1) (5, 6) (order maintained)
(3, 7) (3, 1) (4, 1) (5, 6) (order changed)
Unstable sorting algorithms may change the relative order of records with equal keys, but stable sorting algorithms never do so. Unstable sorting algorithms can be specially implemented to be stable. One way of doing this is to artificially extend the key comparison, so that comparisons between two objects with otherwise equal keys are decided using the order of the entries in the original data order as a tie-breaker. Remembering this order, however, often involves an additional space penalty.
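To make the tie-breaking technique concrete, here is a minimal Java sketch (class and variable names are invented for illustration). It sorts the example pairs above by their first coordinate, using each record's original position as a secondary key, so that even an unstable underlying sort produces the stable result:

```java
import java.util.Arrays;
import java.util.Comparator;

public class StableTieBreak {
    public static void main(String[] args) {
        int[][] pairs = { {4, 1}, {3, 1}, {3, 7}, {5, 6} };

        // Sort an array of original positions instead of the pairs themselves.
        Integer[] idx = { 0, 1, 2, 3 };

        // Compare by the first coordinate; on ties, fall back to the original
        // position. This makes even an unstable underlying sort behave stably,
        // at the cost of remembering the original order.
        Arrays.sort(idx, Comparator
                .<Integer>comparingInt(i -> pairs[i][0])
                .thenComparingInt(i -> i));

        for (int i : idx)
            System.out.println("(" + pairs[i][0] + ", " + pairs[i][1] + ")");
        // Output: (3, 1) (3, 7) (4, 1) (5, 6), i.e. order of equal keys maintained.
    }
}
```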
List of sorting algorithms
In this table, n is the number of records to be sorted and k is the number of distinct keys. The columns "Best", "Average", and "Worst" give the time complexity in each case; estimates that do not use k assume k to be constant. "Memory" denotes the amount of auxiliary storage needed beyond that used by the list itself. "Cmp" indicates whether the sort is a comparison sort.
Name | Best | Average | Worst | Memory | Stable | Cmp | Method | Other notes |
---|---|---|---|---|---|---|---|---|
Bubble sort | O(n) | — | O(n²) | O(1) | Yes | Yes | Exchanging | Times are for best variant |
Cocktail sort | O(n) | — | O(n²) | O(1) | Yes | Yes | Exchanging | |
Comb sort | O(n log n) | — | O(n²) | O(1) | No | Yes | Exchanging | |
Gnome sort | O(n) | — | O(n²) | O(1) | Yes | Yes | Exchanging | |
Selection sort | O(n²) | O(n²) | O(n²) | O(1) | No | Yes | Selection | |
Insertion sort | O(n) | — | O(n²) | O(1) | Yes | Yes | Insertion | |
Shell sort | O(n log n) | — | O(n log² n) | O(1) | No | Yes | Insertion | Times are for best variant |
Binary tree sort | O(n log n) | — | O(n log n) | O(n) | Yes | Yes | Insertion | Worst case assumes a balanced tree |
Library sort | O(n) | O(n log n) | O(n²) | O((1+ε)n) | Yes | Yes | Insertion | |
Merge sort | O(n log n) | — | O(n log n) | O(n) | Yes | Yes | Merging | |
In-place merge sort | O(n log n) | — | O(n log n) | O(1) | Yes | Yes | Merging | Times are for best variant |
Heapsort | O(n log n) | — | O(n log n) | O(1) | No | Yes | Selection | |
Smoothsort | O(n) | — | O(n log n) | O(1) | No | Yes | Selection | |
Quicksort | O(n log n) | O(n log n) | O(n²) | O(log n) | No | Yes | Partitioning | Naive variants use O(n) space |
Introsort | O(n log n) | O(n log n) | O(n log n) | O(log n) | No | Yes | Hybrid | |
Pigeonhole sort | O(n+k) | — | O(n+k) | O(k) | Yes | No | Indexing | |
Bucket sort | O(n) | O(n) | O(n²) | O(n+k) | Yes | No | Indexing | |
Counting sort | O(n+k) | — | O(n+k) | O(n+k) | Yes | No | Indexing | |
Radix sort | O(nk) | — | O(nk) | O(n) | Yes | No | Indexing | |
Patience sorting | O(n) | — | O(n log n) | O(n) | No | Yes | Insertion | Also finds longest increasing subsequences |
This table describes some sorting algorithms that are impractical for real-life use due to extremely poor performance or a requirement for specialized hardware.
Name | Best | Average | Worst | Memory | Stable | Cmp | Other notes |
---|---|---|---|---|---|---|---|
Bogosort | O(n) | O(n × n!) | unbounded | O(1) | No | Yes | |
Stupid sort | O(n) | — | O(n³) | O(1) | Yes | Yes | Memory is O(n²) for the recursive version |
Stooge sort | O(n^2.71) | — | O(n^2.71) | O(1) | No | Yes | |
Bead sort | O(n) | — | O(n) | — | N/A | No | Requires specialized hardware |
Pancake sorting | O(n) | — | O(n) | — | No | Yes | Requires specialized hardware |
Sorting networks | O(log n) | — | O(log n) | — | Yes | Yes | Requires a custom circuit of size O(n log n) |
Summaries of some popular sorting algorithms
Bubble sort
Bubble sort is a straightforward and simplistic method of sorting data that is used in computer science education. The algorithm starts at the beginning of the data set. It compares the first two elements, and if the first is greater than the second, it swaps them. It continues doing this for each pair of adjacent elements to the end of the data set. It then starts again with the first two elements, repeating until no swaps have occurred on the last pass. Although simple, this algorithm is highly inefficient and is rarely used except in education. A slightly better variant, cocktail sort, alternates the direction of successive passes, bubbling the largest element to the end on one pass and the smallest to the front on the next.
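A minimal Java sketch of these passes (class and method names are illustrative), including the stop condition of a pass with no swaps:

```java
class BubbleSortDemo {
    /** Bubble sort as described above: sweep the array repeatedly, swapping
     *  adjacent out-of-order elements, until a pass makes no swaps. */
    static void bubbleSort(int[] a) {
        boolean swapped = true;
        int end = a.length;              // elements at positions >= end are in place
        while (swapped) {
            swapped = false;
            for (int i = 1; i < end; i++) {
                if (a[i - 1] > a[i]) {   // adjacent pair out of order: swap it
                    int tmp = a[i - 1];
                    a[i - 1] = a[i];
                    a[i] = tmp;
                    swapped = true;
                }
            }
            end--;                       // the largest element has bubbled to the end
        }
    }
}
```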
Insertion sort
Insertion sort is a simple sorting algorithm that is relatively efficient for small lists and mostly-sorted lists, and often is used as part of more sophisticated algorithms. It works by taking elements from the list one by one and inserting them in their correct position into a new sorted list. In arrays, the new list and the remaining elements can share the array's space, but insertion is expensive, requiring shifting all following elements over by one. Shell sort (see below) is a variant of insertion sort that is more efficient for larger lists.
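A minimal Java sketch of the in-array variant just described (names are illustrative); the sorted prefix and the remaining elements share the array's space:

```java
class InsertionSortDemo {
    /** Insertion sort: grow a sorted prefix, inserting each new element into
     *  place by shifting larger elements one position to the right. */
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];             // next element to insert
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];        // shift larger elements right
                j--;
            }
            a[j + 1] = key;             // drop the element into its slot
        }
    }
}
```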
Shell sort
Shell sort was invented by Donald Shell in 1959. It improves upon bubble sort and insertion sort by moving out-of-order elements more than one position at a time. One implementation can be described as arranging the data sequence in a two-dimensional array and then sorting the columns of the array using insertion sort. Although this method is inefficient for large data sets, it is one of the fastest algorithms for sorting small numbers of elements (fewer than about 1,000). Another advantage of this algorithm is that it requires relatively little memory.
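A Java sketch of one common formulation (names illustrative): gapped insertion sorts with a gap that shrinks toward 1. The halving gap sequence used here is just one simple choice; other sequences give better worst-case bounds.

```java
class ShellSortDemo {
    /** Shell sort: insertion sort over interleaved subsequences, with the
     *  gap between compared elements shrinking each round until it is 1. */
    static void shellSort(int[] a) {
        for (int gap = a.length / 2; gap > 0; gap /= 2) {
            // Gapped insertion sort: each of the 'gap' subsequences is sorted.
            for (int i = gap; i < a.length; i++) {
                int key = a[i];
                int j = i - gap;
                while (j >= 0 && a[j] > key) {
                    a[j + gap] = a[j];   // move elements 'gap' positions at a time
                    j -= gap;
                }
                a[j + gap] = key;
            }
        }
    }
}
```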
Merge sort
Merge sort takes advantage of the ease of merging already sorted lists into a new sorted list. It starts by comparing every two elements (i.e. 1 with 2, then 3 with 4...) and swapping them if the first should come after the second. It then merges each of the resulting lists of two into lists of four, then merges those lists of four, and so on; until at last two lists are merged into the final sorted list. This is the first of the algorithms described here which scales well to very large lists.
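A bottom-up Java sketch that follows the description directly (names illustrative): it merges runs of width 1, then 2, then 4, and so on, using an O(n) auxiliary buffer.

```java
class MergeSortDemo {
    /** Bottom-up merge sort: repeatedly merge adjacent sorted runs of
     *  doubling width until one sorted run covers the whole array. */
    static void mergeSort(int[] a) {
        int n = a.length;
        int[] buf = new int[n];                     // O(n) auxiliary space
        for (int width = 1; width < n; width *= 2) {
            for (int lo = 0; lo < n; lo += 2 * width) {
                int mid = Math.min(lo + width, n);
                int hi = Math.min(lo + 2 * width, n);
                // Merge the sorted runs a[lo..mid) and a[mid..hi) into buf.
                int i = lo, j = mid, k = lo;
                while (i < mid && j < hi)
                    buf[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];  // <= keeps it stable
                while (i < mid) buf[k++] = a[i++];
                while (j < hi)  buf[k++] = a[j++];
                System.arraycopy(buf, lo, a, lo, hi - lo);
            }
        }
    }
}
```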
Heapsort
Heapsort is a member of the family of selection sorts. This family of algorithms works by determining the largest (or smallest) element of the list, placing it at the end (or beginning) of the list, and then continuing with the rest of the list. Straight selection sort runs in O(n²) time, but heapsort accomplishes its task efficiently by using a data structure called a heap, a binary tree in which each parent is larger than either of its children. Once the data list has been made into a heap, the root node is guaranteed to be the largest element. It is removed and placed at the end of the list, and the remaining list is then rearranged to restore the heap property.
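A compact Java sketch of this process (names illustrative). The array itself stores the binary tree: the children of the node at index i sit at indices 2i+1 and 2i+2.

```java
class HeapSortDemo {
    /** Heapsort: build a max-heap in the array, then repeatedly swap the
     *  root (largest element) to the end and restore the heap property. */
    static void heapSort(int[] a) {
        int n = a.length;
        for (int i = n / 2 - 1; i >= 0; i--)  // heapify: sift down each internal node
            siftDown(a, i, n);
        for (int end = n - 1; end > 0; end--) {
            int tmp = a[0]; a[0] = a[end]; a[end] = tmp;  // move max into place
            siftDown(a, 0, end);                          // re-heapify the rest
        }
    }

    /** Restore the max-heap property for the subtree rooted at i, within a[0..size). */
    static void siftDown(int[] a, int i, int size) {
        while (2 * i + 1 < size) {
            int child = 2 * i + 1;
            if (child + 1 < size && a[child + 1] > a[child]) child++;  // larger child
            if (a[i] >= a[child]) return;                  // heap property holds
            int tmp = a[i]; a[i] = a[child]; a[child] = tmp;
            i = child;
        }
    }
}
```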
Quicksort
Quicksort is a divide-and-conquer algorithm which relies on a partition operation: to partition an array, we choose an element, called a pivot, move all smaller elements before the pivot, and move all greater elements after it. This can be done efficiently in linear time and in-place. We then recursively sort the lesser and greater sublists. Efficient implementations of quicksort (with in-place partitioning) are typically unstable sorts and somewhat complex, but are among the fastest sorting algorithms in practice. Together with its modest O(log n) space usage, this makes quicksort one of the most popular sorting algorithms, available in many standard libraries. The most complex issue in quicksort is choosing a good pivot element; poor choices of pivots can result in drastically slower (O(n²)) performance.
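A Java sketch with in-place Lomuto-style partitioning (names illustrative). The middle element serves as the pivot purely for simplicity; this is exactly the pivot-choice decision discussed above, and production implementations choose more carefully (median-of-three, random, etc.).

```java
class QuickSortDemo {
    /** Quicksort: partition around a pivot, then recurse on both sides.
     *  Call as quickSort(a, 0, a.length - 1). */
    static void quickSort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int mid = lo + (hi - lo) / 2;
        int tmp = a[mid]; a[mid] = a[hi]; a[hi] = tmp;     // park the pivot at the end
        int pivot = a[hi], p = lo;
        for (int i = lo; i < hi; i++) {
            if (a[i] < pivot) {                            // smaller element: move it
                tmp = a[i]; a[i] = a[p]; a[p] = tmp;       // before the pivot position
                p++;
            }
        }
        tmp = a[p]; a[p] = a[hi]; a[hi] = tmp;             // pivot into final position
        quickSort(a, lo, p - 1);                           // sort the lesser sublist
        quickSort(a, p + 1, hi);                           // sort the greater sublist
    }
}
```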
Radix sort
Radix sort is an algorithm that sorts a list of fixed-size numbers of length k in O(n · k) time by treating them as bit strings. We first sort the list by the least significant bit while preserving their relative order using a stable sort. Then we sort them by the next bit, and so on from right to left, and the list will end up sorted. Most often, the counting sort algorithm is used to accomplish the bitwise sorting, since the number of values a bit can have is small.
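Below is a Java sketch of this bitwise LSD scheme (names illustrative; it assumes non-negative 32-bit integers so the sign bit never matters). Each pass is a stable two-bucket counting sort on one bit; real implementations usually group 8 or 16 bits per pass.

```java
class RadixSortDemo {
    /** LSD radix sort on non-negative 32-bit integers, one bit at a time:
     *  a stable two-bucket pass per bit, from least to most significant. */
    static void radixSort(int[] a) {
        int[] out = new int[a.length];
        for (int bit = 0; bit < 31; bit++) {
            int zeros = 0;
            for (int x : a)
                if (((x >> bit) & 1) == 0) zeros++;        // size of the 0-bucket
            int zi = 0, oi = zeros;
            for (int x : a) {                              // stable scatter: 0s, then 1s
                if (((x >> bit) & 1) == 0) out[zi++] = x;
                else out[oi++] = x;
            }
            System.arraycopy(out, 0, a, 0, a.length);
        }
    }
}
```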
Memory usage patterns and index sorting
When the size of the array to be sorted approaches or exceeds the available primary memory, so that (much slower) disk or swap space must be employed, the memory usage pattern of a sorting algorithm becomes important, and an algorithm that might have been fairly efficient when the array fit easily in RAM may become impractical. In this scenario, the total number of comparisons becomes (relatively) less important, and the number of times sections of memory must be copied or swapped to and from the disk can dominate the performance characteristics of an algorithm. Thus, the number of passes and the localization of comparisons can be more important than the raw number of comparisons, since comparisons of nearby elements to one another happen at system bus speed (or, with caching, even at CPU speed), which, compared to disk speed, is virtually instantaneous.
For example, the popular recursive quicksort algorithm provides quite reasonable performance with adequate RAM, but due to the recursive way that it copies portions of the array it becomes much less practical when the array does not fit in RAM, because it may cause a number of slow copy or move operations to and from disk. In that scenario, another algorithm may be preferable even if it requires more total comparisons.
One way to work around this problem, which works well when complex records (such as in a relational database) are being sorted by a relatively small key field, is to create an index into the array and then sort the index, rather than the entire array. (A sorted version of the entire array can then be produced with one pass, reading from the index, but often even that is unnecessary, as having the sorted index is adequate.) Because the index is much smaller than the entire array, it may fit easily in memory where the entire array would not, effectively eliminating the disk-swapping problem.
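A Java sketch of the idea (the keys array is an invented stand-in for the small key field of large database records): only the small index is rearranged, and the records themselves never move.

```java
import java.util.Arrays;
import java.util.Comparator;

class IndexSortDemo {
    /** Build and sort an index into the records rather than the records
     *  themselves; reading the records through the index yields them in order. */
    static Integer[] sortedIndex(int[] keys) {
        Integer[] idx = new Integer[keys.length];
        for (int i = 0; i < keys.length; i++) idx[i] = i;
        // Only this small index array is rearranged in memory.
        Arrays.sort(idx, Comparator.comparingInt(i -> keys[i]));
        return idx;
    }
}
```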
Another technique for overcoming the memory-size problem is to combine two algorithms in a way that takes advantages of the strength of each to improve overall performance. For instance, the array might be subdivided into chunks of a size that will fit easily in RAM (say, a few thousand elements), the chunks sorted using an efficient algorithm (such as quicksort or heapsort), and the results merged as per mergesort. This is more efficient than just doing mergesort in the first place, but it requires less physical RAM (to be practical) than a full quicksort on the whole array.
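The following Java sketch shows the shape of this hybrid on a single in-memory array (a real external sort would write the sorted chunks to disk and stream the merge). The method name and the assumption that chunk > 0 are illustrative, not from the original.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.PriorityQueue;

class ChunkedSortDemo {
    /** Sort fixed-size chunks independently, then k-way merge them,
     *  as in the quicksort-plus-mergesort hybrid described above. */
    static void chunkedSort(int[] a, int chunk) {
        int n = a.length;
        for (int lo = 0; lo < n; lo += chunk)           // sort each chunk in RAM
            Arrays.sort(a, lo, Math.min(lo + chunk, n));

        // k-way merge: a min-heap of {value, position} pairs, one per chunk.
        PriorityQueue<int[]> heap =
                new PriorityQueue<>(Comparator.comparingInt(e -> e[0]));
        for (int lo = 0; lo < n; lo += chunk)
            heap.add(new int[] { a[lo], lo });

        int[] out = new int[n];
        for (int k = 0; k < n; k++) {
            int[] top = heap.poll();                    // smallest head of any chunk
            out[k] = top[0];
            int next = top[1] + 1;
            if (next < n && next % chunk != 0)          // advance within that chunk
                heap.add(new int[] { a[next], next });
        }
        System.arraycopy(out, 0, a, 0, n);
    }
}
```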
Techniques can also be combined. For sorting really enormous amounts of data that completely dwarf system memory, even the index may need to be sorted using an algorithm or combination of algorithms designed to perform reasonably with virtual memory, i.e., to reduce the amount of swapping required.
Graphical representations
Microsoft's "Quick" programming languages (such as QuickBASIC and QuickPascal) have a file named "sortdemo" (with extension BAS and PAS for QB and QP, respectively) in the examples folder that provides a graphical representation of several of the various sort procedures described here, as well as performance ratings of each.
Also, a program by Robb Cutler called The Sorter for classic Mac OS performs a similar function. It illustrates Quick sort, Merge sort, Heap sort, Shell sort, Insertion sort, Bubble sort, Shaker sort, Binary Sort, and Selection sort.
Finally, at the University of British Columbia, Jason Harrison has a page graphically demonstrating the activity of various in-place sorts.
Language support
Most languages have built-in support for sorting. These implementations typically use a single algorithm tuned for high-performance in-memory sorting, and for this purpose using them is usually strongly preferable to writing one's own sorting algorithm. Quicksort is a frequent choice. Here are some of the most well-known:
- C includes `qsort()`, a standard library function that can perform an arbitrary comparison sort on an array of objects using a comparison operator passed in as a function pointer. Its implementation is usually, although not required to be, based on quicksort. While flexible, the overhead of invoking the comparison operator through a function pointer is often significant, since the call cannot be inlined.
- C++ retains `qsort()` but adds the templated STL function `std::sort`, which can be specialized for particular types of data. In addition to added type safety, it often outperforms `qsort()`, particularly when the comparison operation is cheap; if used on multiple types of objects, however, it can use more code space.
- The Java class `java.util.Arrays` (available in 1.2 and later) includes a variety of sorting functions specialized to particular primitive data types, as well as a version that allows an arbitrary comparator object to be specified (a short usage sketch follows this list). Implementation notes about the specific implementations used by Sun, including quicksort and mergesort, are available in the API documentation.[2] Other implementations can use other algorithms "so long as the specification itself is adhered to. (For example, the algorithm used by `sort(Object[])` does not have to be a mergesort, but it does have to be stable.)"
- The .NET Framework supplies the static method `Array.Sort()`, which can sort an array of objects using an arbitrary comparator.[3] It additionally supplies a `Sort()` method on the `ArrayList` class.[4] The new generic collections do not provide sorting.
- Perl provides a built-in `sort` function that can take a comparator function returning a negative, zero, or positive integer to indicate less, equal, or greater, respectively.[5] It used quicksort in version 5.6 and earlier, and afterwards a somewhat slower but stable mergesort.
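For example, the comparator version of `java.util.Arrays.sort` mentioned above can be used as follows (sample data invented; written with the Java 8+ comparator helpers rather than a hand-rolled comparator class):

```java
import java.util.Arrays;
import java.util.Comparator;

class LanguageSupportDemo {
    public static void main(String[] args) {
        String[] words = { "pear", "fig", "banana", "kiwi" };
        // Arbitrary comparator: sort by length, then alphabetically on ties.
        // The object version of Arrays.sort is guaranteed stable, as noted above.
        Arrays.sort(words, Comparator
                .comparingInt(String::length)
                .thenComparing(Comparator.naturalOrder()));
        System.out.println(Arrays.toString(words));  // [fig, kiwi, pear, banana]
    }
}
```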
See also
- Big O notation
- External sorting
- Sorting networks (compare)
- Collation
- Schwartzian Transform
- Wikibooks: Algorithms: Uses sorting a deck of cards with many sorting algorithms as an example
External links and references
- D. E. Knuth, The Art of Computer Programming, Volume 3: Sorting and Searching.
- [6] has explanations and analyses of many of these algorithms.
- [7] has information on many of these algorithms.
- Ricardo Baeza-Yates' sorting algorithms on the Web
- 'Dictionary of Algorithms, Data Structures, and Problems'
- Slightly Skeptical View on Sorting Algorithms Softpanorama page that discusses several classic algorithms and promotes alternatives to quicksort.
- For some slides and PDFs, see Manchester University's course notes
- For a repository of algorithms with source code and lectures, see The Stony Brook Algorithm Repository
- Graphical Java illustrations of the Bubble sort, Insertion sort, Quicksort, and Selection sort
- xSortLab - An interactive Java demonstration of Bubble, Insertion, Quick, Select and Merge sorts, which displays the data as a bar graph with commentary on the workings of the algorithm printed below the graph.
- Sorting Algorithms Demo - Java applets that chart the progress of several common sorting algorithms while sorting an array of data using in-place algorithms.
- [8] - An applet visually demonstrating a contest between a number of different sorting algorithms
- The Three Dimensional Bubble Sort - A method of sorting in three or more dimensions (of questionable merit)
- Sorting Algorithms Visualized - Java applet visualizing the different approaches of 22 sorting algorithms (configurable input set size and value distribution)