Multiplying Large Numbers with Karatsuba's Algorithm
Numbers are magical: magical in their properties, their existence, and most importantly, their huge impact on our modern day-to-day life. Everything can be reduced and narrowed down to numbers. In this second part of our series on large numbers, we'll delve more deeply into this magic.
Karatsuba's Algorithm
This algorithm was discovered by Anatolii Alexeevich Karatsuba in 1960 and published in a joint paper with Yu. Ofman in 1962. The idea behind it is to trade some of the multiplications of the classic method for additions and shifts, which only take linear time.
Karatsuba's algorithm can be considered one of the first binary splitting algorithms, or what is more commonly known today as "divide et impera." Divide and conquer, said Julius Caesar, and this is the way Karatsuba's algorithm works too.
For a start, every number can be written in the following form:
x = x1 * B^m + x2
where B is the base (in our case 10, as we are working in the decimal system) and m is a number smaller than the digit count of x. Now just do the same for the other number. Let it be y, and we get the following:
y = y1 * B^m + y2
Executing the multiplication in this form yields the equation below:
x*y = (x1*y1) * B^(2m) + (x1*y2 + x2*y1) * B^m + x2*y2
In this form we have four multiplications, but each of them works on numbers with fewer digits than the original ones. There are also some additions, but as I suggested earlier, addition is so cheap (it runs in linear time) that we can almost ignore its execution time. However, this alone isn't enough. Karatsuba observed that the expression above can be computed with only three multiplications, using the identity:

x1*y2 + x2*y1 = (x1 + x2) * (y1 + y2) - x1*y1 - x2*y2

Now we have three multiplications of numbers smaller than the initial ones, plus two additions and two subtractions. Because addition and subtraction take linear time O(n), the multiplication becomes much faster this way. Furthermore, we can apply Karatsuba's method recursively to each of the three smaller multiplications.
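As a sketch of the recursion just described, here is a toy C version that works on 64-bit integers rather than the digit arrays a real large-number library would use; the function names and structure are my own illustration, not the article's code:

```c
/* Toy Karatsuba on 64-bit integers: three recursive multiplications
 * instead of four. Overflows for large inputs; a real library works
 * on digit arrays instead. */

static long long pow10ll(int m) {       /* computes 10^m */
    long long p = 1;
    while (m-- > 0) p *= 10;
    return p;
}

static int digits(long long n) {        /* decimal digit count */
    int d = 1;
    while (n >= 10) { n /= 10; d++; }
    return d;
}

long long karatsuba(long long x, long long y) {
    if (x < 10 || y < 10)               /* base case: one digit */
        return x * y;
    /* split both numbers at half the shorter digit count */
    int m = (digits(x) < digits(y) ? digits(x) : digits(y)) / 2;
    long long bm = pow10ll(m);
    long long x1 = x / bm, x2 = x % bm;
    long long y1 = y / bm, y2 = y % bm;
    long long a = karatsuba(x1, y1);            /* x1*y1           */
    long long b = karatsuba(x2, y2);            /* x2*y2           */
    long long c = karatsuba(x1 + x2, y1 + y2);  /* (x1+x2)(y1+y2)  */
    /* middle term by two subtractions: c - a - b = x1*y2 + x2*y1 */
    return a * bm * bm + (c - a - b) * bm + b;
}
```

For example, `karatsuba(540, 606)` returns 327240, matching 540 * 606. Note that only three recursive calls are made per level; the fourth multiplication has been replaced by the two subtractions.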
As for m, it is most efficient to make the two parts equal in length, so let it be the digit count divided by two. Implemented this way, the algorithm's complexity comes down to O(n^(log 3 / log 2)) = O(n^1.585). That is quite a difference from the classic method's O(n^2).
And the result is (with X = x1*y1, Z = x1*y2 + x2*y1, and Y = x2*y2):

X * 10^(2*2) + Z * 10^2 + Y = 30*10^4 + 270*10^2 + 240 = 327240
If you still don't get the idea, working through a small example by hand, step by step, shows the process in great detail. You can observe that this algorithm works best when X and Y are composed of the same number of digits. And if you do some tests, you will observe that the classic method is faster for smaller numbers; this occurs roughly for numbers below 2^320.
To overcome this, most large-number libraries, like the one we are writing now, switch back to the classic method below this threshold. In the implementation you will find that I defined a SWITCH_TO_CLASSIC value and set it to 121. It turned out to be the best value after some tests. If you want to try it out yourself, just change this value and run the program for two numbers, say 10K digits long (1024 * 10 digits).
#define SWITCH_TO_CLASSIC 121
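Around that constant, the dispatch can look roughly like the sketch below. The function names and the little-endian digit-array representation are illustrative assumptions, not the article's actual code; the sketch reuses the classic loop on both branches so it stays self-contained, where the real library would recurse with Karatsuba on the second branch.

```c
#include <string.h>

#define SWITCH_TO_CLASSIC 121

/* Classic schoolbook multiply on little-endian digit arrays:
 * r must have room for 2*n digits. */
static void classic_multiply(const int *x, const int *y, int *r, int n)
{
    memset(r, 0, sizeof(int) * 2 * n);
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            r[i + j] += x[i] * y[j];
    for (int k = 0; k < 2 * n - 1; k++) {   /* propagate carries */
        r[k + 1] += r[k] / 10;
        r[k] %= 10;
    }
}

/* Hypothetical dispatch point: small operands use the classic
 * method, large ones would take the Karatsuba path. */
void multiply(const int *x, const int *y, int *r, int n)
{
    if (n <= SWITCH_TO_CLASSIC)
        classic_multiply(x, y, r, n);   /* small: classic wins */
    else
        classic_multiply(x, y, r, n);   /* placeholder: the real
                                           library recurses with
                                           Karatsuba here */
}
```

The exact crossover value depends on the machine and on how tight the classic inner loop is, which is why it is worth re-tuning the constant after any change to either routine.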
Now that you have a good grasp of the algorithm, you could also take a look at more sophisticated (though harder to implement) methods like Toom-Cook multiplication or the even faster Schönhage-Strassen algorithm, or implement the one we've just covered. If you choose the latter, just turn the page.