15 votes

Mathematicians discover a more efficient way to multiply large numbers

6 comments

  1. [4]
    mb3077

    This new method is the first known example of the Schönhage-Strassen algorithm.

    According to the researchers, multiplying two numbers together with a billion digits each by the process of long multiplication would take a computer months to calculate.

    Using the Schönhage-Strassen algorithm, it would take under 30 seconds, and with their new theoretical proof, it would be even quicker – theoretically – and may even represent the fastest multiplication algorithm that's mathematically possible.

    It is important to note that the findings are yet to be peer-reviewed.

    13 votes
    1. [3]
      Hello

      I'm not sure I understand what's new? Implementations of the Schönhage–Strassen algorithm have existed since the 1970s.

      6 votes
      1. Greg

        The wording of the article isn't especially clear, but from what I can see in the paper they've got an actual algorithm that achieves the previously theoretical lower limit of O(n log n), compared to the Schönhage-Strassen algorithm's O(n log n log log n).

        The article confuses matters a bit by talking about the improvement from a naïve O(n^2) method to Schönhage-Strassen (which is very significant but, as you say, old news) rather than the tiny (but mathematically interesting) difference that this new algorithm brings.
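
        To get a feel for how small that difference is, here's a rough back-of-the-envelope comparison in Python - proportional operation counts only, ignoring constant factors entirely:

            import math

            n = 10**9  # digits per number, as in the article's example
            naive = n ** 2                                    # "grade school" long multiplication
            ss = n * math.log2(n) * math.log2(math.log2(n))   # Schönhage-Strassen, n log n log log n
            new = n * math.log2(n)                            # the new bound, n log n
            print(f"naive ~ {naive:.1e}, S-S ~ {ss:.1e}, new ~ {new:.1e}")

        The naive count comes out around 10^18, while the last two differ only by a factor of log log n - roughly 5 here - so it's a huge jump from schoolbook to either fast method and a comparatively tiny one between them.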

        11 votes
  2. [2]
    SunSpotter

    I'm curious about this, maybe someone else here could shed some insight. If the algorithm is only useful for really large numbers, I would assume that means it's actually inefficient for numbers below a certain size? But if so, what is that minimum size?

    And do the conclusions that there are algorithms for n^1.58 and n*log(n) theoretically apply to all calculations where n>=2? It would be really interesting if a more scale independent version of this could be found, but I recognize that if such a thing is possible it would be much harder to discover and prove.

    4 votes
    1. Greg
      (edited)
      • Exemplary

      Just for clarity, there are four algorithms mentioned here:

      • "Grade school" long multiplication: O(n2)
      • Karatsuba: O(nlog23), which is roughly O(n1.58)
      • Schönhage-Strassen: O(n logn log logn)
      • The as-yet unnamed new one from this paper: O(n logn)
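
      As a concrete illustration of the Karatsuba entry, here's a minimal, deliberately unoptimised sketch in Python - not how a real bignum library does it, just the splitting trick that takes you from O(n^2) to O(n^1.58). The 2**64 cutoff is an arbitrary choice for the example:

          def karatsuba(x, y):
              # Below the (arbitrary) cutoff, fall back to the built-in multiply.
              if x < 2**64 or y < 2**64:
                  return x * y
              half = max(x.bit_length(), y.bit_length()) // 2
              x_hi, x_lo = x >> half, x & ((1 << half) - 1)
              y_hi, y_lo = y >> half, y & ((1 << half) - 1)
              a = karatsuba(x_hi, y_hi)                        # high * high
              b = karatsuba(x_lo, y_lo)                        # low * low
              c = karatsuba(x_hi + x_lo, y_hi + y_lo) - a - b  # both cross terms at once
              return (a << (2 * half)) + (c << half) + b

      Three recursive multiplications on half-size numbers instead of four is the whole trick, and it's where the log_2 3 ≈ 1.58 exponent comes from. (CPython's integers already use Karatsuba internally for large values, so this toy version won't beat the built-in * in practice.)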

      Before this point we knew that O(n log n) was theoretically possible, but not how to do so - the breakthrough here is that the authors have now figured out how to achieve it in practice.

      If the algorithm is only useful for really large numbers, I would assume that means it's actually inefficient for numbers below a certain size?

      Absolutely correct. If you look at the Java BigInteger source, for example, it gives implementations for all of the first three that I mentioned above to use for different sized inputs.
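
      The general shape of that size-based dispatch looks something like this - a Python sketch with made-up thresholds and stand-in functions, since the real cutoffs and the exact algorithm tiers vary by library and platform:

          def schoolbook_mul(x, y):   # stand-in for the O(n^2) routine
              return x * y

          def karatsuba_mul(x, y):    # stand-in for the O(n^1.58) routine
              return x * y

          def fft_mul(x, y):          # stand-in for a Schönhage-Strassen-style routine
              return x * y

          KARATSUBA_THRESHOLD = 2_000   # digits - hypothetical value, not from any real library
          FFT_THRESHOLD = 200_000       # digits - hypothetical value, not from any real library

          def multiply(x, y):
              digits = max(len(str(abs(x))), len(str(abs(y))))
              if digits < KARATSUBA_THRESHOLD:
                  return schoolbook_mul(x, y)
              if digits < FFT_THRESHOLD:
                  return karatsuba_mul(x, y)
              return fft_mul(x, y)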

      But if so, what is that minimum size?

      In practice, there's an informed but arbitrary line drawn based on the number of digits - it depends on the exact implementation, the architecture, and potentially a few other things as well when it comes to actual wall clock execution time. In the enormously unlikely event that you're the person who needs to derive these numbers, you might start with an order of magnitude estimate from theory, but you'll almost definitely refine it empirically by running some real world benchmarks and seeing where the lines cross over - which is why the comments in that Java library are peppered with "This value is found experimentally to work well".
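
      If you ever did want to eyeball a crossover yourself, the experiment is simple enough - something along these lines, though it's only a toy: CPython's built-in * already switches to Karatsuba for big values, and interpreter overhead swamps the hand-written loop, so real libraries benchmark comparable low-level implementations instead:

          import timeit

          def schoolbook(x, y):
              # Deliberately naive digit-by-digit long multiplication.
              xs, ys = str(x), str(y)
              total = 0
              for i, a in enumerate(reversed(xs)):
                  for j, b in enumerate(reversed(ys)):
                      total += int(a) * int(b) * 10 ** (i + j)
              return total

          for digits in (10, 100, 500):
              x = int("9" * digits)
              naive = timeit.timeit(lambda: schoolbook(x, x), number=3)
              builtin = timeit.timeit(lambda: x * x, number=3)
              print(f"{digits:>4} digits: naive {naive:.4f}s, built-in {builtin:.6f}s")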

      And do the conclusions that there are algorithms for n^1.58 and n*log(n) theoretically apply to all calculations where n>=2?

      Yes, they do. And with this new paper, they aren't just theoretical conclusions - there's now a working example that we can use!

      To understand why we have the threshold issues even though any of the algorithms will work at any scale, it's important to remember that big O notation is only concerned with growth, not with size. A simple example to hammer this home: an algorithm that takes 100,000,000,000,000n operations is O(n), but one that takes exactly n^2 operations is O(n^2). You can see that the latter grows much more steeply, but that doesn't inherently make it slower, because the constant factor on the former is so large; unless you're regularly dealing with n>100,000,000,000,000 it'll be faster to use the second option, but order notation isn't the tool to tell you that. Quickest and easiest way to figure it out? Eyeball the code for any obvious starting points, find out there probably aren't any, and then do the real-world benchmarking I mentioned above.
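
      A quick sanity check of that crossover point:

          HUGE = 100_000_000_000_000   # the silly constant from the example above

          for n in (10**13, 10**14, 10**15):
              linear = HUGE * n      # operation count for the O(n) algorithm
              quadratic = n ** 2     # operation count for the O(n^2) algorithm
              print(n, "O(n) wins" if linear < quadratic else "O(n^2) wins or ties")

      The two lines meet exactly at n = 100,000,000,000,000, which is where the edit note below puts it.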

      Hopefully all that made some kind of sense! I'm a little rusty on all of this, so if anything looks off then point it out - it could well be my fault.

      [Edit] Fixed the first mistake I spotted! The 100,000,000,000,000n crossover point is at n>100,000,000,000,000 not n>10,000,000 as I originally put. That would be the case only if it were O(1) complexity with a fixed cost of 100,000,000,000,000 operations. I also now regret typing so many zeros when making a silly enormous number because it's making this whole thing hard to read.

      10 votes