Stable Diffusion Demo#
Ryzen AI 1.5 provides preview demos of Stable Diffusion image-generation pipelines. The demos cover Image-to-Image and Text-to-Image using SD 1.5, SD 2.1-base, SD-Turbo, SDXL-Turbo and SD 3.0.
The models for SD 1.5, SD 2.1-base, SD-Turbo, and SDXL-Turbo are available for public download. The SD 3.0 models are available only to confirmed Stability AI licensees.
NOTE: Preview features are still undergoing optimization and fine-tuning. They are not in their final form and may change as they mature into full-fledged features.
Installation Steps#
Ensure that the latest versions of Ryzen AI and the NPU drivers are installed. See the Installation Instructions.
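Before proceeding, you can confirm that the NPU is visible to the software stack with the xrt-smi utility, which is also used later in this demo (this assumes xrt-smi is available on the PATH after installation):
xrt-smi examine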
Copy the GenAI-SD folder from the RyzenAI installation tree to your working area, and then go to the copied folder. For instance:
xcopy /I /E "C:\Program Files\RyzenAI\1.5.0\GenAI-SD" C:\Temp\GenAI-SD
cd C:\Temp\GenAI-SD
Create a Conda environment for the Stable Diffusion demo packages:
conda update -n base -c defaults conda
conda env create --file=env.yaml
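To confirm that the environment was created, list the available Conda environments. The environment name used by this demo, ryzenai-stable-diffusion, comes from env.yaml and is the name activated in the next section:
conda env list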
Download the Stable Diffusion models, extract the downloaded zip files, and copy the models into the GenAI-SD\models folder. After installing all the models, the GenAI-SD\models folder should contain the following subfolders (a quick check is shown after the list):
sd15_controlnet
sd_15
sd_21_base
sd_turbo
sdxl_turbo
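From the GenAI-SD folder, a quick way to verify this layout is to list the models directory and compare it against the list above (dir works in both Command Prompt and PowerShell):
dir models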
Running the Demos#
Activate the conda environment:
conda activate ryzenai-stable-diffusion
Optionally, set the NPU to high-performance mode to maximize throughput:
xrt-smi configure --pmode performance
Refer to the documentation on xrt-smi configure for additional information.
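When the demo session is finished, the NPU can be returned to its default power mode with the same utility (the available pmode values are listed in the xrt-smi configure documentation):
xrt-smi configure --pmode default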
Image-to-Image with ControlNet#
The image-to-image demo generates images from a text prompt and a control image processed by a Canny ControlNet. This demo supports SD 1.5 (512x512).
To run the demo, navigate to the GenAI-SD\test directory and run the following command:
python .\run_sd15_controlnet.py
The demo script uses a predefined prompt and ref\control.png as the control image. The output image and the control image are saved in the generated_images folder.
The control image can be modified, and custom prompts can be provided with the --prompt option. For instance:
python run_sd15_controlnet.py --prompt "A red bird on a grey sky"
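To review the results, list the output folder after the run completes (assuming the folder is created in the current working directory, per the output description above):
dir generated_images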
Text-to-Image#
The text-to-image demo generates images from text prompts. This demo supports SD 1.5 (512x512), SD 2.1-base (768x768), SD-Turbo (512x512), and SDXL-Turbo (512x512).
To run the demo, navigate to the GenAI-SD\test directory and use the following commands to run each of the supported models:
python run_sd.py --model_id 'stable-diffusion-v1-5/stable-diffusion-v1-5' --model_path ..\models\sd_15
python run_sd.py --model_id 'stabilityai/stable-diffusion-2-1-base' --model_path ..\models\sd_21_base
python run_sd.py --model_id 'stabilityai/sd-turbo' --model_path ..\models\sd_turbo
python run_sd_xl.py --model_id 'stabilityai/sdxl-turbo' --model_path ..\models\sdxl_turbo
The demo script uses a predefined prompt for each of the models. The output images are saved in the generated_images folder.
Custom prompts can be provided with the --prompt option. For instance:
python run_sd.py --model_id 'stabilityai/stable-diffusion-2-1-base' --model_path ..\models\sd_21_base --prompt "A bouquet of roses, impressionist style"
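To generate images for several prompts in one session, the command can be wrapped in a simple loop. A minimal PowerShell sketch (assuming the commands above are run from PowerShell, which matches their quoting style; the prompts here are illustrative):
# Generate one image per prompt with SD-Turbo
foreach ($p in "A red bird on a grey sky", "A bouquet of roses, impressionist style") {
    python run_sd.py --model_id 'stabilityai/sd-turbo' --model_path ..\models\sd_turbo --prompt $p
}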