j2kun 15 hours ago

They claim the algorithm "discovered" the new techniques, but the methods described in section 5 do not seem all that novel to me. It smells like it could be "laundering" the literature [1] and reshuffling existing techniques. This is not inherently a bad thing, but I would hope that if it is borrowing existing techniques, the appropriate citation would eventually make it into this paper.

[1]: https://www.argmin.net/p/lore-laundering-machines

  • Q6T46nT668w6i3m 14 hours ago

    You’re not kidding. I just looked. There isn’t anything novel in that section. I assumed from the description that they’d found novel methods, but this is standard GPU Gems advice.

  • AlexCoventry 15 hours ago

    In the future, we will all be Jürgen Schmidhuber. :-)

    • hedgehog 9 hours ago

      I hate to break it to you but the original work on that topic was by Schmidhuber & Schmidhuber back in 1963.

  • alyxya 15 hours ago

    There generally aren't new techniques when optimizing something ubiquitous. Instead, there are a lot of ways to apply existing techniques to create new and better results. Most ideas are built on top of the same foundational principles.

    • slashdave 13 hours ago

      I am not sure about that. However, what is clear is that if there is a new technique, it will not be found by this LLM.

      • CapsAdmin 11 hours ago

        It's generally true, isn't it? Otherwise we'd have groundbreaking discoveries every day about some new, faster way to do X.

        The way I see it, mathematicians have been trying (and somewhat succeeding every ~5 years) to prove faster ways to do matrix multiplication since the 1970s. But this is only in theory.

        If you want to implement the theory, you suddenly have many variables to take care of, such as memory speed, CPU instructions, bit precision, etc. So in practice, an actual implementation of some theory likely has more room to improve. It is also likely that LLMs can help figure out how to write a more optimal implementation.
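
        To make the "in theory" part concrete (textbook material, nothing from the paper): Strassen's 1969 construction does a 2x2 block multiply with 7 recursive multiplications instead of 8, which gives roughly

            T(n) = 7*T(n/2) + O(n^2)  =>  T(n) = O(n^log2(7)) ≈ O(n^2.81)

        but the extra additions, the worse memory locality, and the numerical behavior are exactly the practical variables that keep tuned O(n^3) kernels dominant in real libraries.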

konradha an hour ago

I've been trying my hand at RL envs for various sparse matrix algorithms in CUDA. It's easy to generate code that "looks good", "novel", and "fast". Escaping the distribution and actually creating novel instruction sequences or even patterns (has any model come up with something as useful as the fan-in/fan-out or double-buffering patterns that are now ubiquitous?) seems difficult, to say the least.
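
For anyone who hasn't seen it, here is a minimal sketch of the double-buffering pattern I mean, in a plain tiled FP32 matmul. The kernel name, tile size, and the assumption that N is a multiple of TILE are all illustrative, not from the paper:

    // Two shared-memory buffers per operand: while the inner product runs on
    // tile t (buffer `cur`), the global loads for tile t+1 land in `nxt`.
    // Assumes N is a multiple of TILE and a (TILE, TILE) thread block.
    #define TILE 32

    __global__ void matmul_double_buffered(const float* A, const float* B,
                                           float* C, int N) {
        __shared__ float As[2][TILE][TILE];
        __shared__ float Bs[2][TILE][TILE];

        int row = blockIdx.y * TILE + threadIdx.y;
        int col = blockIdx.x * TILE + threadIdx.x;
        float acc = 0.0f;
        int numTiles = N / TILE;

        // Preload tile 0 into buffer 0.
        As[0][threadIdx.y][threadIdx.x] = A[row * N + threadIdx.x];
        Bs[0][threadIdx.y][threadIdx.x] = B[threadIdx.y * N + col];
        __syncthreads();

        for (int t = 0; t < numTiles; ++t) {
            int cur = t & 1;
            int nxt = cur ^ 1;

            // Prefetch tile t+1 into the idle buffer while we compute on `cur`.
            if (t + 1 < numTiles) {
                int kBase = (t + 1) * TILE;
                As[nxt][threadIdx.y][threadIdx.x] = A[row * N + kBase + threadIdx.x];
                Bs[nxt][threadIdx.y][threadIdx.x] = B[(kBase + threadIdx.y) * N + col];
            }

            // Partial dot product over the current tile.
            for (int k = 0; k < TILE; ++k)
                acc += As[cur][threadIdx.y][k] * Bs[cur][k][threadIdx.x];

            // Make the prefetched tile visible before the next iteration reads it.
            __syncthreads();
        }

        C[row * N + col] = acc;
    }

The win is just overlap: the loads for the next tile are in flight while the FMAs for the current one execute. Production kernels layer cp.async, vectorized loads, register tiling, etc. on top, but the buffer-swap skeleton is the same.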

alyxya 15 hours ago

The chart confused me because I expected to see performance numbers for CUDA-L2 compared to the others, but instead it shows the speedup percentage of CUDA-L2 over each of them. In some sense, the bar chart inverts the relative performance of torch.matmul and cuBLAS: the slower a baseline is, the larger the percentage shown for it. 0% on the bar chart would only mean equal performance.
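
A made-up example of how I read it (assuming the percentage is baseline time over CUDA-L2 time, minus one, which is what "0% = equal" suggests; the numbers are illustrative):

    cuBLAS time:    1.00 ms
    CUDA-L2 time:   0.80 ms
    bar for cuBLAS: (1.00 / 0.80 - 1) * 100% = 25%

So a taller bar means a slower baseline, and the chart says nothing about absolute throughput.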

stonogo 15 hours ago

Am I reading this wrong, or does this only support FP16 inputs while comparing its performance against an FP32 solver?

bgwalter 15 hours ago

[flagged]

  • krapht 15 hours ago

    This is a standard which few kernels will ever meet. I'd say requiring a numerical proof is the same as requiring no proof at all - because it won't ever happen unless you're validating silicon or something equally expensive.

    • Q6T46nT668w6i3m 14 hours ago

      I guess it depends on your definition of proof, but I’d say the reasoning and justification sections of a TOMS article qualify, and that’s a standard nearly every popular library meets.