Mingyuan Zhou, University of Texas at Austin
Diffusion-based models are central to advances in generative AI thanks to their photorealistic outputs, but they face a major hurdle: slow generation. Our Score identity Distillation (SiD) method challenges the belief that quality in diffusion models requires iterative refinement, distilling the generative process into a single, swift step. SiD achieves rapid reductions in Fréchet Inception Distance (FID) during distillation and, in many cases, exceeds the quality of the original models, which require dozens to hundreds of sampling steps. By reinterpreting the forward diffusion process with semi-implicit distributions and exploiting three novel score-based identities, we introduce a unique loss mechanism that trains the generator on its own synthesized images, eliminating the need for real data or conventional reverse diffusion while dramatically reducing generation time. Evaluated across four benchmark datasets, SiD demonstrates unparalleled efficiency and superior quality compared with current generative methods, setting new standards for diffusion model distillation and expanding the potential of diffusion-based generation. This innovation makes high-quality generative processes more accessible and practical for a wide range of applications, opening new research and application avenues in generative AI.
3:30pm - Pre-talk meet-and-greet teatime - 219 Prospect Street, 13th floor; light snacks and beverages will be available in the kitchen area.