Running Stable Diffusion on VisionFive 2 (RISC-V)



Using stable-diffusion.cpp + Web Interface (no GPU, no CUDA, fully CPU-based)

This guide explains how to run Stable Diffusion locally on your VisionFive 2 (RISC-V) using
stable-diffusion.cpp — a pure C++ implementation that runs completely on CPU,
and how to serve it through a clean Flask web UI installed in /opt.


⚙️ 1. Prepare the system

Update your packages and install dependencies:

sudo apt update
sudo apt install -y git cmake build-essential libopenblas-dev python3 python3-pip python3-flask

🧩 2. Clone and build stable-diffusion.cpp

cd ~
git clone --recursive https://github.com/leejet/stable-diffusion.cpp
cd stable-diffusion.cpp
mkdir build && cd build

Now compile. The flags below disable the GPU backends and the RISC-V vector extension (RVV), which the VisionFive 2's U74 cores do not implement, and link against OpenBLAS:

cmake .. \
  -DCMAKE_BUILD_TYPE=Release \
  -DSD_CUBLAS=OFF \
  -DSD_METAL=OFF \
  -DGGML_RVV=OFF \
  -DGGML_NATIVE=OFF \
  -DGGML_BLAS=ON \
  -DGGML_BLAS_VENDOR=OpenBLAS \
  -DCMAKE_C_FLAGS="-march=rv64gc -mabi=lp64d" \
  -DCMAKE_CXX_FLAGS="-march=rv64gc -mabi=lp64d"

make -j$(nproc)

Then install it globally:

sudo make install
sudo ldconfig

Now the sd command is available system-wide.
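Before moving on, it is worth confirming the binary actually landed on your PATH. A minimal sanity check (assumes `make install` used the default /usr/local prefix; `check_cmd` is a small helper defined here, not part of stable-diffusion.cpp):

```shell
# Print where a command was installed, or warn if it is not on PATH.
check_cmd() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "found: $(command -v "$1")"
    else
        echo "not found in PATH: $1" >&2
        return 1
    fi
}

check_cmd sd || echo "check your install prefix (default: /usr/local/bin)"
```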


🧠 3. Download a quantized model (GGUF format)

This model is lightweight and ideal for CPU-only environments:

mkdir -p ~/sd-models
cd ~/sd-models

wget https://huggingface.co/second-state/stable-diffusion-v1-5-GGUF/resolve/main/stable-diffusion-v1-5-pruned-emaonly-Q4_0.gguf

✅ Model used:
stable-diffusion-v1-5-pruned-emaonly-Q4_0.gguf — a 4-bit (Q4_0) quantization, which keeps memory use and load time low on low-power CPUs.
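A truncated download is the most common cause of a failed model load, so check that the file arrived intact before running anything. A small sketch (`check_model` is a helper defined here for this guide; the path matches the commands above):

```shell
# Report the size of the downloaded model, or complain if it is missing.
check_model() {
    if [ -f "$1" ]; then
        ls -lh "$1"
    else
        echo "model not found: $1" >&2
        return 1
    fi
}

check_model "$HOME/sd-models/stable-diffusion-v1-5-pruned-emaonly-Q4_0.gguf" \
    || echo "re-run the wget above"
```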


🧪 4. Test your installation

Try generating a test image:

sd -m ~/sd-models/stable-diffusion-v1-5-pruned-emaonly-Q4_0.gguf \
   -p "a small robot painting on a canvas, studio light" \
   -o ~/test.png \
   --steps 15 \
   -t 4

After a few minutes, you should see test.png in your home folder.
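Generation on this board is slow, so it helps to measure it before tuning. One option is a small wrapper function (a sketch, not part of sd itself) that fixes the model path and settings so repeated runs stay comparable; `--steps` and `-t` (threads, up to the 4 cores) are the two main knobs:

```shell
# Wrapper sketch: one place to keep the model path and settings.
MODEL="$HOME/sd-models/stable-diffusion-v1-5-pruned-emaonly-Q4_0.gguf"

sd_run() {
    # $1 = prompt, $2 = output file
    echo "generating: $2"
    sd -m "$MODEL" -p "$1" -o "$2" --steps 15 -t 4
}

# Example, timed to get a baseline:
#   time sd_run "a lighthouse at dusk, oil painting" ~/lighthouse.png
```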


🌐 5. Install the Web UI in /opt

We’ll download a Flask-based interface directly from GitHub
to keep everything cleanly stored in /opt.

sudo mkdir -p /opt/sd-server
sudo wget -O /opt/sd-server/sd-server.py https://github.com/kroryan/staff/raw/main/sd-server.py
sudo chmod +x /opt/sd-server/sd-server.py

⚡ 6. Create a global launcher command

To launch the server easily, create the sd-server command:

sudo nano /usr/local/bin/sd-server

Paste this:

#!/bin/bash
python3 /opt/sd-server/sd-server.py "$@"

Save and make it executable:

sudo chmod +x /usr/local/bin/sd-server

Now you can run the web UI from anywhere using:

sd-server

🖥️ 7. Access the Web Interface

Once running, open your browser and visit:

http://<your-visionfive-ip>:8082

You’ll get a minimal web UI where you can:

  • Choose your .gguf model

  • Write a text prompt

  • Wait for generation (with progress indicator)

  • View and download the resulting image

The generated images are saved automatically inside
~/sd-outputs by default.
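Generated images accumulate quickly on a small SD card. A housekeeping sketch (assumes the default ~/sd-outputs location; `SD_OUTDIR` is a variable introduced here, not read by the server):

```shell
# Delete generated PNGs older than 7 days; -print lists each file as it
# is removed. Uses GNU find, which Debian on the VisionFive 2 ships.
clean_outputs() {
    dir="${SD_OUTDIR:-$HOME/sd-outputs}"
    [ -d "$dir" ] || return 0
    find "$dir" -name '*.png' -mtime +7 -print -delete
}

clean_outputs
```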


🔁 8. (Optional) Run automatically on startup

You can make the web server start at boot via systemd.

sudo nano /etc/systemd/system/sd-server.service

Paste this configuration:

[Unit]
Description=Stable Diffusion Web UI Server
After=network.target

[Service]
Type=simple
User=root
WorkingDirectory=/opt/sd-server
ExecStart=/usr/local/bin/sd-server
Restart=on-failure

[Install]
WantedBy=multi-user.target

Then enable and start it:

sudo systemctl daemon-reload
sudo systemctl enable sd-server
sudo systemctl start sd-server

✅ The Stable Diffusion web UI will now launch automatically on every boot.
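Running the service as root works but is not strictly necessary; the server only needs to read the models and write the output folder. A variant of the same unit under an unprivileged account (replace youruser with the account you actually use; everything else matches the unit above):

```ini
[Unit]
Description=Stable Diffusion Web UI Server
After=network.target

[Service]
Type=simple
User=youruser
WorkingDirectory=/opt/sd-server
ExecStart=/usr/local/bin/sd-server
Restart=on-failure

[Install]
WantedBy=multi-user.target
```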


🧠 Summary

Component         Description
Engine            stable-diffusion.cpp (C++ CPU backend)
Web UI            Flask-based server
Model             stable-diffusion-v1-5-pruned-emaonly-Q4_0.gguf
Install Path      /opt/sd-server/sd-server.py
Launch Command    sd-server
Web Port          8082
Autostart         Optional systemd service

⚙️ Example Workflow

# Start the web server
sd-server

# Open in browser
http://192.168.1.123:8082

# Generate locally:
sd -m ~/sd-models/stable-diffusion-v1-5-pruned-emaonly-Q4_0.gguf \
   -p "a fantasy castle floating above clouds" \
   -o result.png \
   --steps 20 \
   -t 4
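The same binary can also be driven in a loop for unattended batch generation. A sketch using the model path from step 3 (the prompt list is just an example; written in POSIX sh so it runs under any shell):

```shell
# Generate one numbered image per prompt, continuing past failures.
MODEL="$HOME/sd-models/stable-diffusion-v1-5-pruned-emaonly-Q4_0.gguf"

i=1
for p in \
    "a fantasy castle floating above clouds" \
    "a lighthouse on a cliff at dusk"
do
    echo "[$i] $p"
    sd -m "$MODEL" -p "$p" -o "result-$i.png" --steps 20 -t 4 \
        || echo "generation failed: $p" >&2
    i=$((i + 1))
done
```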

🚀 Optional Improvements

  • Add more .gguf models to ~/sd-models

  • Mount /opt/sd-server on an external SSD for faster I/O

  • Enable HTTPS with Nginx reverse proxy

  • Add WebSocket live progress for a professional look


✅ Everything you need is included.
After completing these steps, your VisionFive 2 becomes a fully self-contained Stable Diffusion web appliance: no GPU, no CUDA, no external cloud. The only Python involved is the lightweight Flask front-end.


This post was written with the help of ChatGPT, but its content was created by the owner of this blog; ChatGPT only helped to phrase it more clearly and in more detail.
