Abstract
Diffusion models such as Stable Diffusion XL (SDXL) excel at generating high-quality images but struggle to adapt to specialized domains with limited data, such as microstructure generation in materials science. This study presents a novel SDXL-based framework, fine-tuned with Low-Rank Adaptation (LoRA) and DreamBooth techniques, that both generates and predicts microstructure images across seen and unseen experimental parameters at minimal computational cost. By selectively fine-tuning the UNet and text encoders, with targeted modifications and optimal hyperparameters, our method accurately captures intricate microstructure characteristics. It enables controlled image generation across varied process parameters, such as temperature, annealing time, and cooling method, thereby reducing the need for additional experimentation. Rigorous evaluations demonstrate that our approach outperforms benchmark methods in both image quality and fidelity to real microstructures. This scalable strategy addresses data scarcity and costly experimentation, enabling the generation of extensive, high-quality datasets with predictive capabilities applicable to broader scientific domains.
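The sketch below illustrates the general pattern the abstract describes: attaching LoRA adapters to the SDXL UNet and both text encoders so that only the low-rank weights are trained during DreamBooth-style fine-tuning. It is a minimal illustration using the Hugging Face diffusers and peft libraries, not the authors' released code; the model ID, rank, and target modules are assumptions.

```python
# Minimal sketch: LoRA adapters on the SDXL UNet and text encoders (assumed setup).
import torch
from diffusers import StableDiffusionXLPipeline
from peft import LoraConfig

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)

# LoRA on the UNet attention projections; rank and targets are illustrative.
unet_lora = LoraConfig(
    r=8, lora_alpha=8,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
    init_lora_weights="gaussian",
)
pipe.unet.add_adapter(unet_lora)

# Optionally adapt both SDXL text encoders as well.
text_lora = LoraConfig(
    r=8, lora_alpha=8,
    target_modules=["q_proj", "k_proj", "v_proj", "out_proj"],
    init_lora_weights="gaussian",
)
pipe.text_encoder.add_adapter(text_lora)
pipe.text_encoder_2.add_adapter(text_lora)

# Only the LoRA parameters require gradients; the base SDXL weights stay frozen,
# which is what keeps the computational demand low.
trainable = sum(p.numel() for p in pipe.unet.parameters() if p.requires_grad)
print(f"Trainable UNet LoRA parameters: {trainable:,}")
```

In a DreamBooth-style run, these adapters would then be trained on captioned microstructure images whose prompts encode the process parameters (temperature, annealing time, cooling method), enabling the controlled generation described above.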