from Hacker News

Fast-DLLM: Training-Free Acceleration of Diffusion LLM

by nathan-barry on 10/24/25, 2:50 AM with 4 comments

  • by ProofHouse on 10/24/25, 7:01 AM

    Wait, based on everything I’ve read about Diffusion Language Models, and the demos I’ve seen and tried, inference is faster than with traditional architectures. They state the opposite here; what gives?