Methods of computing square roots
This article presents and explains several methods which can be used to calculate square roots.
Exponential identity
Pocket calculators typically implement good routines to compute the exponential function and the natural logarithm, and then compute the square root of r using the identity
- √r = e^((1/2) ln r).
The same identity is used when computing square roots with logarithm tables or slide rules.
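As a minimal illustration of this identity in C (the function name sqrt_via_exp is ours, not a standard routine):

#include <math.h>

/* Square root via the identity above: sqrt(r) = e^((1/2) ln r), for r > 0. */
double sqrt_via_exp(double r)
{
    return exp(0.5 * log(r));
}

For example, sqrt_via_exp(125348.0) returns approximately 354.045, the value computed by other methods below.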
Rough estimation
Many of the methods for calculating square roots require an initial seed value. If the initial value is too far from the actual square root, the calculation will be slowed down. It is therefore useful to have a rough estimate, which may be very inaccurate but is easy to calculate. One way of obtaining such an estimate of √r is to calculate 3^D, where D is the number of digits in r (if r < 1, D will be negative); since 3 ≈ √10, this is roughly √(10^D).
A better method of rough estimation is this:
- If D is odd, D = 2n + 1, then use √r ≈ 2·10^n.
- If D is even, D = 2n + 2, then use √r ≈ 6·10^n.
When working in the binary numeral system (as computers do internally), an alternative method is to use √r ≈ 2^⌊D/2⌋ (here D is the number of binary digits).
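A small C helper along these lines (the function name and the restriction to r ≥ 1 are our illustrative choices) might look as follows:

#include <math.h>

/* Rough estimate of sqrt(r) for r >= 1: 2*10^n if the digit count D = 2n + 1
 * is odd, 6*10^n if D = 2n + 2 is even. */
double rough_sqrt_estimate(double r)
{
    int d = 1;                      /* number of decimal digits before the point */
    while (r >= 10.0) {
        r /= 10.0;
        d++;
    }
    int n = (d % 2 == 1) ? (d - 1) / 2 : (d - 2) / 2;
    return ((d % 2 == 1) ? 2.0 : 6.0) * pow(10.0, n);
}

For r = 125348 (six digits, so n = 2) this returns 600, which is the starting value used in the example below.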
Babylonian method
A commonly used algorithm for approximating √r is known as the "Babylonian method" and can be derived from (but predates) Newton's method. This is a quadratically convergent algorithm, which means that the number of correct digits of the approximation roughly doubles with each step. It proceeds as follows:
- Start with an arbitrary positive start value x (the closer it is to the square root of r, the better).
- Replace x by the average of x and r/x, that is, (x + r/x)/2.
- Repeat step 2 until the desired accuracy is achieved.
It can also be represented as:
- x_{n+1} = (x_n + r/x_n) / 2, where x_0 is an initial guess; then √r = lim_{n→∞} x_n.
This algorithm works equally well in the p-adic numbers, but cannot be used to identify real square roots with p-adic square roots; it is easy, for example, to construct a sequence of rational numbers by this method which converges to +3 in the reals, but to −3 in the 2-adics.
Example
We'll calculate √r, where r = 125348, to 6 significant figures. Here D, the number of digits in r, is 6, so D = 2n + 2 with n = 2 and the rough estimate is x_0 = 6·10² = 600. Then:

x_1 = (x_0 + r/x_0) / 2 = (600 + 208.913) / 2 ≈ 404.457
x_2 = (x_1 + r/x_1) / 2 ≈ (404.457 + 309.917) / 2 ≈ 357.187
x_3 ≈ (357.187 + 350.931) / 2 ≈ 354.059
x_4 ≈ (354.059 + 354.031) / 2 ≈ 354.045
x_5 ≈ (354.045 + 354.045) / 2 ≈ 354.045

Therefore, √125348 ≈ 354.045.
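A direct C rendering of this iteration (the starting value and stopping tolerance are illustrative choices) is:

#include <math.h>

/* Babylonian method: repeatedly replace x by the average of x and r/x. */
double babylonian_sqrt(double r, double x)
{
    for (int i = 0; i < 100; i++) {
        double next = 0.5 * (x + r / x);
        if (fabs(next - x) <= 1e-12 * next)   /* stop once the change is negligible */
            return next;
        x = next;
    }
    return x;
}

Called as babylonian_sqrt(125348.0, 600.0), it reproduces the sequence of the example above and returns 354.045…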
Bakhshali approximation
This is a method for finding an approximation to a square root which was described in an ancient manuscript known as the Bakhshali Manuscript. It is equivalent to the Babylonian method applied twice. The original presentation goes as follows: To calculate √r, let N² be the nearest perfect square to r. Then, calculate:

d = r − N²
P = d / (2N)
A = N + P
√r ≈ A − P² / (2A)

This can also be written as:

√r ≈ (N⁴ + 6N²r + r²) / (4N³ + 4Nr)
Example
We'll find √125348 once more. The nearest perfect square is 354² = 125316, so N = 354 and:

d = 125348 − 125316 = 32
P = 32 / 708 ≈ 0.0451977
A = 354 + 0.0451977 = 354.0451977
√125348 ≈ 354.0451977 − 0.0451977² / (2 · 354.0451977) ≈ 354.045195

in agreement with the value found by the Babylonian method above.
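In C, a single application of this approximation can be sketched as follows (the function name is ours; N should already be close to the true root):

/* One application of the Bakhshali approximation to sqrt(r), given a value N
 * whose square is close to r (ideally the nearest perfect square). */
double bakhshali_sqrt(double r, double N)
{
    double d = r - N * N;            /* how far r is from the nearby square  */
    double P = d / (2.0 * N);        /* first correction                     */
    double A = N + P;                /* Babylonian-style improved estimate   */
    return A - (P * P) / (2.0 * A);  /* second correction                    */
}

bakhshali_sqrt(125348.0, 354.0) returns 354.04519…, accurate to roughly ten significant figures.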
Digit by digit calculation
This is a method to find each digit of the square root in a sequence. It is much slower than the Babylonian method, but it has several advantages:
- It can be easier for manual calculations.
- Every digit found is known to be correct.
- If the square root has an expansion which terminates, the algorithm will terminate after the last digit is found. It can thus be used to check whether a given integer is a square number.
Napier's bones include an aid for the execution of this algorithm. The Shifting nth-root algorithm is a generalization of this method.
The algorithm works for any base, and naturally, the way it proceeds depends on the base chosen. We will describe the method for the decimal system. It goes as follows:
Write the decimal expansion of the number. The numbers are laid out in a fashion similar to the long division algorithm, and the final result will appear above the original number. Divide the digits of the number into pairs, starting from the decimal point and working both leftwards and rightwards. Each pair of digits corresponds to a single digit of the result.
For each iteration:
- Bring down the most significant (leftmost) pair of digits not yet used (if all the digits of the number have been used, use the digits 00) and append them to the remainder of the previous step (for the first step, the remainder is 0). This will be the "current value".
- Let s denote the part of the result found so far, ignoring any decimal point (for the first step, s = 0). Determine the greatest digit x such that x(20s + x) does not exceed the current value (20s + x is simply twice s, with the digit x appended to the right). Place the digit x as the next digit of the result.
- Subtract x(20s + x) from the current value to form a new remainder.
- If the remainder is zero and there are no more digits to bring down, the algorithm has terminated. Otherwise continue with step 1.
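For integers, the procedure can be sketched in C as follows (the function name digit_by_digit_isqrt is illustrative; it returns the integer part of the square root):

/* Digit-by-digit square root, base 10: returns floor(sqrt(n)). */
unsigned long long digit_by_digit_isqrt(unsigned long long n)
{
    /* Split n into decimal digit pairs, least significant pair first. */
    unsigned long long pairs[16];
    int count = 0;
    do {
        pairs[count++] = n % 100;
        n /= 100;
    } while (n > 0);

    unsigned long long remainder = 0;   /* the "current value" of each step    */
    unsigned long long result = 0;      /* digits found so far (the "s" above) */
    for (int i = count - 1; i >= 0; i--) {
        remainder = remainder * 100 + pairs[i];     /* bring down the next pair   */
        unsigned long long x = 9;
        while (x * (20 * result + x) > remainder)   /* greatest x with x(20s + x) */
            x--;                                    /* not exceeding the value    */
        remainder -= x * (20 * result + x);         /* subtract to get remainder  */
        result = result * 10 + x;                   /* append x to the result     */
    }
    return result;
}

For example, digit_by_digit_isqrt(1522756) returns 1234; continuing the same process past the decimal point with pairs of zeros yields further digits, as the worked examples below illustrate.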
Examples
Find the square root of 152.2756.
         1  2. 3  4
 \/   01 52.27 56

 x       01               1*1 <= 1 < 2*2             x = 1
         01               y = x*x = 1*1 = 1
 2x      00 52            22*2 <= 52 < 23*3          x = 2
         00 44            y = 2x*x = 22*2 = 44
 24x        08 27         243*3 <= 827 < 244*4       x = 3
            07 29         y = 24x*x = 243*3 = 729
 246x          98 56      2464*4 <= 9856 < 2465*5    x = 4
               98 56      y = 246x*x = 2464*4 = 9856
               00 00      Algorithm terminates: Answer is 12.34
Find the square root of 2.
          1.  4  1  4  2
 \/   02. 00 00 00 00

 x        02                 1*1 <= 2 < 2*2                 x = 1
          01                 y = x*x = 1*1 = 1
 2x       01 00              24*4 <= 100 < 25*5             x = 4
          00 96              y = 2x*x = 24*4 = 96
 28x         04 00           281*1 <= 400 < 282*2           x = 1
             02 81           y = 28x*x = 281*1 = 281
 282x           01 19 00     2824*4 <= 11900 < 2825*5       x = 4
                01 12 96     y = 282x*x = 2824*4 = 11296
 2828x             06 04 00  28282*2 <= 60400 < 28283*3     x = 2
                             Desired accuracy achieved: Answer is 1.4142
Iterative methods for inverse square roots
The following are iterative methods for finding the inverse square root of r, that is, 1/√r. Once it has been found, we can find √r by simple multiplication: √r = r · (1/√r). The iterations involve only multiplication, and not division; they are therefore faster than the Babylonian method. However, they are not stable; if the initial value is not close to 1/√r, the iterations will diverge away from it rather than converge to it.
- This is found by applying Newton's method to the equation 1/x² − r = 0:
  x_{n+1} = (x_n / 2) · (3 − r·x_n²)
- This requires fewer iterations to converge, but involves more operations per iteration:
  x_{n+1} = (x_n / 8) · (15 − r·x_n²·(10 − 3·r·x_n²))
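A C sketch of the first of these iterations (the function name, iteration cap and tolerance are illustrative; x0 must already be reasonably close to 1/√r):

#include <math.h>

/* Newton iteration for 1/sqrt(r): x <- (x/2)(3 - r*x*x), using only
 * multiplication and subtraction. Diverges if x0 is too far from 1/sqrt(r). */
double inv_sqrt_newton(double r, double x0)
{
    double x = x0;
    for (int i = 0; i < 100; i++) {
        double next = 0.5 * x * (3.0 - r * x * x);
        if (fabs(next - x) <= 1e-15 * fabs(next))
            break;
        x = next;
    }
    return x;              /* sqrt(r) can then be recovered as r * x */
}

For example, inv_sqrt_newton(125348.0, 1.0/600.0) converges to about 0.0028245, and multiplying by r gives 354.045…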
Taylor series
If N is an approximation to √r, a better approximation can be found by using the Taylor series of the square root function:

√(N² + d) = N + d/(2N) − d²/(8N³) + d³/(16N⁵) − 5d⁴/(128N⁷) + … , where d = r − N²
As an iterative method, the order of convergence is equal to the number of terms used. With 2 terms, it is identical to the Babylonian method; with 3 terms, each iteration takes almost as many operations as the Bakhshali approximation, but converges more slowly. Therefore, this is not a particularly efficient way of calculation.
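For instance, reusing r = 125348 with N = 354 and d = 125348 − 354² = 32, the three-term sum gives 354 + 32/708 − 1024/(8·354³) ≈ 354.0451977 − 0.0000029 ≈ 354.045195, in agreement with the values found above.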
Continued fraction expansion
Quadratic irrationals (numbers of the form (a + √b)/c, where a, b and c are integers), and in particular, square roots of integers, have periodic continued fractions. Sometimes we may be interested not in finding the numerical value of a square root, but rather in its continued fraction expansion. The following iterative algorithm can be used for this purpose (r is any natural number which is not a perfect square):

m_0 = 0
d_0 = 1
a_0 = ⌊√r⌋
m_{n+1} = d_n·a_n − m_n
d_{n+1} = (r − m_{n+1}²) / d_n
a_{n+1} = ⌊(a_0 + m_{n+1}) / d_{n+1}⌋
The algorithm terminates when the triplet (m_n, d_n, a_n) is identical to one encountered before; the expansion will repeat from then on. The sequence [a_0; a_1, a_2, a_3, …] is the continued fraction expansion:

√r = a_0 + 1/(a_1 + 1/(a_2 + 1/(a_3 + …)))
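A C sketch of this algorithm (the function name is illustrative; for brevity it relies on the standard fact that the period of √r ends with the term 2·a_0, rather than storing and comparing triplets):

#include <stdio.h>

/* Print one full period of the continued fraction [a0; a1, a2, ...] of sqrt(r),
 * for a natural number r that is not a perfect square. */
void sqrt_continued_fraction(long r)
{
    long a0 = 0;
    while ((a0 + 1) * (a0 + 1) <= r)     /* a0 = floor(sqrt(r)) by counting */
        a0++;

    long m = 0, d = 1, a = a0;
    printf("[%ld;", a0);
    do {
        m = d * a - m;
        d = (r - m * m) / d;
        a = (a0 + m) / d;
        printf(" %ld", a);
    } while (a != 2 * a0);               /* last term of the period is 2*a0 */
    printf("]\n");
}

For example, sqrt_continued_fraction(14) prints [3; 1 2 1 6], i.e. √14 = [3; 1, 2, 1, 6, 1, 2, 1, 6, …].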
Pell's equation
Pell's equation and its variants yield a method for efficiently finding continued fraction convergents of square roots of integers. However, it can be complicated to execute, and usually not every convergent is generated. The ideas behind the method are as follows:
- If (p, q) is a solution (where p and q are integers) to the equation p² − r·q² = 1, then p/q is a continued fraction convergent of √r, and as such, is an excellent rational approximation to it.
- If (p_a, q_a) and (p_b, q_b) are solutions, then so is (p_a·p_b + r·q_a·q_b, p_a·q_b + q_a·p_b).
- More generally, if (p_1, q_1) is a solution, then it is possible to generate a sequence of solutions (p_n, q_n) satisfying:
  p_{n+1} = p_1·p_n + r·q_1·q_n
  q_{n+1} = p_1·q_n + q_1·p_n
The method is as follows:
- Find natural numbers (0 excluded) p_1 and q_1 such that p_1² − r·q_1² = 1. This is the hard part; it can be done either by guessing, or by using fairly sophisticated techniques.
- To generate a long list of convergents, iterate:
  p_{n+1} = p_1·p_n + r·q_1·q_n
  q_{n+1} = p_1·q_n + q_1·p_n
- To find the larger convergents quickly, iterate:
  p_{n+1} = p_n² + r·q_n²
  q_{n+1} = 2·p_n·q_n
- In either case, p_n/q_n is a rational approximation satisfying |p_n/q_n − √r| < 1/(2·q_n²).
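As a sketch in C (assuming a fundamental solution (p_1, q_1) is already known; the values overflow 64-bit integers after only a few doublings):

#include <stdio.h>

/* Given r and a solution p*p - r*q*q = 1, apply the "doubling" step
 * (p, q) <- (p*p + r*q*q, 2*p*q) a few times and print the convergents p/q. */
void pell_convergents(long long r, long long p, long long q, int steps)
{
    for (int i = 0; i < steps; i++) {
        long long np = p * p + r * q * q;
        long long nq = 2 * p * q;
        p = np;
        q = nq;
        printf("%lld/%lld = %.12f\n", p, q, (double)p / (double)q);
    }
}

For r = 2 the fundamental solution is (3, 2), and pell_convergents(2, 3, 2, 3) prints 17/12, 577/408 and 665857/470832, successively better approximations to √2.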
Approximations that depend on IEEE representation
On computers, a very rapid approximation to the square root, which can then be refined by Newton's method, can be obtained for floating point numbers when computers use an IEEE (or sufficiently similar) representation.
float fastsqrt(float val)
{
    int tmp = *(int *)&val;   /* reinterpret the IEEE bits as an integer      */
    tmp -= 1 << 23;           /* adjust for the exponent bias before halving  */
    /* tmp is now roughly 2^23 * (log2(val) + 126) */
    tmp = tmp >> 1;           /* divide by 2: halves the (biased) logarithm   */
    tmp += 1 << 29;           /* put back the bias: (127 + 1)/2 * 2^23 = 2^29 */
    return *(float *)&tmp;    /* reinterpret the bits as a float again        */
}
In the above, the operations to add and remove the IEEE bias can be combined into a single operation. An additional adjustment can be made in the same operation to reduce the maximum relative error. So, the three operations, not including the cast, can be rewritten as:
tmp = (1 << 29) + (tmp >> 1) - (1 << 22) + m;
where m is a magic number that corresponds to a calculation involving the initial guess used in the Newton approximation.
A variant of the above routine can be used to compute the inverse square root instead; a well-known implementation of it is often attributed to the programmers of the game Quake III Arena. As an approximation, it produces a relative error of less than 4%, and in computer graphics it is a very efficient way to normalize a vector.
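For reference, a sketch along the lines of the widely circulated implementation (the magic constant 0x5f3759df and a single Newton refinement step; exact constants and style vary between versions):

/* Fast approximate 1/sqrt(x) using the IEEE bit-level trick plus one Newton step. */
float fast_inv_sqrt(float x)
{
    float halfx = 0.5f * x;
    int i = *(int *)&x;              /* reinterpret the float's bits as an int   */
    i = 0x5f3759df - (i >> 1);       /* initial guess from exponent manipulation */
    float y = *(float *)&i;
    y = y * (1.5f - halfx * y * y);  /* one Newton iteration sharpens the guess  */
    return y;
}

The first guess alone is within a few percent of 1/√x, consistent with the error quoted above; the single refinement step reduces this to roughly 0.2%.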
See also
- Alpha max plus beta min algorithm
- Integer square root
- Shifting nth-root algorithm
- N-th root algorithm