Z Image Turbo

Content This article introduces using Z-Image-Turbo with ComfyUI. Advantages of Z-Image-Turbo:
- Strong Chinese prompt-following and Chinese character generation capabilities.
- Requires only 8 inference steps for image generation.
- With a compact 6B parameter count, it can run on consumer-grade hardware (16GB VRAM) using quantization.
Because network restrictions in some regions prevent automatic downloads via ComfyUI-Manager, all file downloads are provided for manual installation. ...

2026-02-04 · 2 min · 265 words · Me

Llamafactory Distributed Training

Content Conduct SFT training with the LLaMA-Factory framework on servers with 8x L20 GPUs running Ubuntu 22.04, using both single-node multi-GPU and multi-node multi-GPU modes. Selected base model: Qwen3-32B.

Environment Configuration: clone the code repository, set up a new conda environment, and install dependencies.

```Plain
git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
conda activate llamafactory_env
pip install -e .
pip install -r requirements/metrics.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
```

Prepare the SFT data, place it in the data folder, and register it in dataset_info.json.

```Plain
cd ./data
# Open dataset_info.json and add the dataset, for example:
# "my_example": { "file_name": "my_example.json" },
```

Use Alpaca or ShareGPT format for SFT data; the Alpaca format is used here (`instruction` and `input` are automatically concatenated with `\n`):

```Plain
[{
  "instruction": "Human instruction (required)",
  "input": "Human input (optional)",
  "output": "Model response (required)",
  "system": "System prompt (optional)",
  "history": [
    ["First round instruction (optional)", "First round response (optional)"],
    ["Second round instruction (optional)", "Second round response (optional)"]
  ]
}]
```

Single-Node Multi-GPU Training

```Plain
# Prepare a yaml file by referring to the existing templates in ./examples, then run it.
# If using DeepSpeed for multi-GPU training, specify the GPUs via CUDA_VISIBLE_DEVICES.
CUDA_VISIBLE_DEVICES=0,1 FORCE_TORCHRUN=1 llamafactory-cli train examples/train_lora/qwen3_30b_lora_sft.yaml
# After training, the adapter weights can be found in the path specified by output_dir in the yaml file.
```

After training, you can invoke the LoRA adapter using vLLM. Here is a docker compose template. ...
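For orientation, a training yaml of the kind referenced above might look like the following minimal sketch. This is an assumption-laden illustration, not the file used in the article: the dataset name matches the `my_example` entry registered earlier, while the template name, batch sizes, and output path are placeholders that should be checked against the templates shipped in ./examples.

```yaml
### model: path or Hugging Face id of the base model (Qwen3-32B in this guide)
model_name_or_path: Qwen/Qwen3-32B

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all

### dataset: name must match the key registered in data/dataset_info.json
dataset: my_example
template: qwen3          # assumed template name; verify against the repo's template list
cutoff_len: 2048

### output
output_dir: saves/qwen3-32b/lora/sft   # illustrative path
logging_steps: 10
save_steps: 500

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
bf16: true
```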

2026-01-21 · 2 min · 349 words · Me

ComfyUI Guide

Content This guide details the deployment of ComfyUI on Ubuntu 22.04, including the manual installation of ComfyUI Manager for extended functionality (custom nodes) and the manual installation of custom nodes, specifically tailored for users within China's network environment. Official Documentation: ComfyUI

Installation: first, ensure conda is available on your server and create a new environment.

```Plain
# Clone the ComfyUI git repository
git clone https://github.com/Comfy-Org/ComfyUI.git
# Navigate to the ComfyUI directory, activate the conda environment, and install dependencies
cd ComfyUI
conda activate comfyui_env
pip install -r requirements.txt
# Start ComfyUI, specifying the port number and GPU device
python main.py --listen --port 10020 --cuda-device 0
```

Install ComfyUI Manager

```Plain
# Change to the custom_nodes subdirectory
cd custom_nodes
# If your network environment has no restrictions
git clone https://github.com/ltdrdata/ComfyUI-Manager.git
# If your network environment has restrictions, manually download the repository
# (https://github.com/Comfy-Org/ComfyUI-Manager), unzip it, rename the folder to
# ComfyUI-Manager, and place it in custom_nodes. Then restart ComfyUI.
python main.py --listen --port 10020 --cuda-device 0
```

Install Any Plugins
i. If the ComfyUI Manager GUI can download nodes, install them through the GUI.
ii. If the ComfyUI Manager GUI consistently fails to download nodes:

```Plain
# Clone the corresponding git repository, rename it, and place it in custom_nodes.
git clone https://github.com/some/custom/nodes.git
# Navigate into the node's directory and install its dependencies.
pip install -r requirements.txt
# Restart ComfyUI.
# Some commonly used custom nodes:
# -- ControlNet: https://github.com/Fannovel16/comfyui_controlnet_aux
# -- ComfyUI-Impact-Pack: https://github.com/ltdrdata/ComfyUI-Impact-Pack
# -- rgthree-comfy: https://github.com/rgthree/rgthree-comfy
```

Quick Start: the most widely used text-to-image model is Flux, and setting up a workflow with it is an excellent starting point. Images generated by ComfyUI embed their workflow information, so an image can be dragged directly into the GUI to re-create its workflow.
Example 1: Flux + LoRA + ControlNet Workflow
Example 2: MimicMotion Action Simulation Workflow

2026-01-14 · 2 min · 292 words · Me

Conda Guide

Overview This guide covers installing Conda on Ubuntu 22.04, migrating the Conda path to a data disk, configuring mirror sources (for regions with internet restrictions), and packing environments for offline deployment.

Installation Package: use wget, or click to download the installation package directly.

Add Conda to PATH for Persistence

```Plain
# Locate the conda command
which conda
# Check the conda root installation directory
conda info --base
# Open ~/.bashrc and add the following line at the bottom
export PATH="/home/ubuntu/miniconda3/bin:$PATH"
# Apply changes
source ~/.bashrc
conda init
# Restart your terminal session
```

Configure Storage Paths for Environments and Packages

```Plain
# Open ~/.condarc and add the following lines:
envs_dirs:
  - /your/path/to/conda/envs
pkgs_dirs:
  - /your/path/to/conda/pkgs
```

Configure Conda Mirror Sources (China)

```Plain
# Using Aliyun mirrors as an example
conda config --add channels https://mirrors.aliyun.com/anaconda/pkgs/main/
conda config --add channels https://mirrors.aliyun.com/anaconda/pkgs/free/
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/conda-forge/
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/bioconda/
# Verify the configuration
conda config --show channels
```

Create Conda Environments

```Plain
# Create an environment (at a specific path or using the default path)
conda create --prefix /your/path/to/conda/envs/my_env python=3.12
conda create -n my_env python=3.12
# Activate the new environment
conda activate my_env
# List existing environments
conda info --envs
```

Examples: Using pip/uv within Conda

```Plain
# Using pip
pip install tqdm -i https://mirrors.aliyun.com/pypi/simple/
# Using uv for faster installation
uv pip install tqdm -i https://mirrors.aliyun.com/pypi/simple/
```

Pack Conda Environments

```Plain
# Pack an existing conda environment (requires conda-pack to be installed)
conda pack -n my_env
# Transfer the archive to the target server and extract it
tar -xvzf my_env.tar.gz -C /your/conda/envs/my_env
```

Dockerizing Local Conda Environments: Example Dockerfile: ...
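A Dockerfile for this pattern could look roughly like the sketch below. It is an assumption, not the article's actual file: it presumes the conda-pack archive `my_env.tar.gz` produced in the previous step, and the install path `/opt/conda/envs/my_env` is a placeholder.

```dockerfile
FROM ubuntu:22.04

# Copy the conda-pack archive built earlier and unpack it into the image
COPY my_env.tar.gz /tmp/
RUN mkdir -p /opt/conda/envs/my_env && \
    tar -xzf /tmp/my_env.tar.gz -C /opt/conda/envs/my_env && \
    rm /tmp/my_env.tar.gz && \
    # conda-pack ships a conda-unpack script that fixes hard-coded prefixes
    /opt/conda/envs/my_env/bin/conda-unpack

# Put the environment's binaries first on PATH so its python is the default
ENV PATH=/opt/conda/envs/my_env/bin:$PATH
CMD ["python", "--version"]
```

This keeps the image free of any conda installation: the packed environment is self-contained once `conda-unpack` has rewritten its prefixes.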

2026-01-07 · 2 min · 350 words · Me

Docker Guide

Overview This guide provides a comprehensive walkthrough for installing Docker on Ubuntu 22.04, migrating the Docker root directory to a data disk, configuring registry mirrors (for regions with internet restrictions), and setting up the NVIDIA Container Toolkit for GPU acceleration.

1. Install Dependencies

```Plain
sudo apt update && sudo apt install -y ca-certificates curl gnupg lsb-release
```

2. Import Docker GPG Key (Aliyun Mirror)

```Plain
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/aliyun-docker.gpg
```

3. Register the Repository

```Plain
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/aliyun-docker.gpg] https://mirrors.aliyun.com/docker-ce/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```

4. Install Docker Engine

```Plain
sudo apt update
sudo apt install -y docker-ce
# Docker starts automatically on Ubuntu. For WSL, use:
sudo service docker start
```

5. Verify Installation

```Plain
sudo docker info
```

6. Manage Docker as a Non-Root User

```Plain
# Create the docker group if it doesn't exist
sudo groupadd docker
# Add your user to the group
sudo usermod -aG docker $USER
# Apply group changes without logging out
newgrp docker
```

7. Operational Cheat Sheet

```Plain
# List all containers (including stopped ones)
docker ps -a
# List all local images
docker images
# Debug the Docker daemon (useful if the service fails to start)
dockerd
# Retag an image
docker tag <image-name-1>:<tag-1> <image-name-2>:<tag-2>
# Remove an image
docker rmi <image-id-or-name>
# Follow container logs (Ctrl+C to exit)
docker logs -f <container-id>
# Inspect container metadata
docker inspect <container-id>
# Force remove a container
docker rm -f <container-id>
```

8. Migrating the Docker Root Directory
To prevent the OS drive from filling up, migrate the storage path to a data disk: ...
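The root-directory migration in step 8 is typically driven by the `data-root` key in `/etc/docker/daemon.json`, which is also where registry mirrors are configured. The fragment below is a sketch: the target path and mirror URL are placeholders, not values from the article.

```json
{
  "data-root": "/data/docker",
  "registry-mirrors": ["https://registry.example-mirror.cn"]
}
```

A common sequence is to stop the daemon (`sudo systemctl stop docker`), copy the existing data with `sudo rsync -a /var/lib/docker/ /data/docker/`, write the config above, then restart with `sudo systemctl restart docker` and confirm the new path appears as "Docker Root Dir" in `docker info`.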

2026-01-07 · 3 min · 501 words · Me

DeepSeek-671B Distributed Deployment

1. Overview
a. This guide describes the deployment of the DeepSeek-671B model across two servers, each equipped with 8x NVIDIA L20 GPUs. The technology stack uses Docker for containerization, the vLLM high-performance inference engine, and the Ray distributed computing framework.
b. Official Documentation: vLLM-Distributed
c. The official tutorial involves complex steps and frequent switching between multiple SSH sessions; to simplify the process, this article consolidates the official workflow into a systematic, one-stop deployment guide. ...
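At a high level, the Ray + vLLM setup described above reduces to commands like the following. This is a sketch of the general multi-node vLLM pattern, not the article's exact script: IPs, the model path, and the parallelism layout are assumptions to be checked against the vLLM distributed-serving docs.

```shell
# On the head node (inside the container): start the Ray head process
ray start --head --port=6379

# On each worker node: join the head's cluster
ray start --address=<head-node-ip>:6379

# Back on the head node: 2 nodes x 8 GPUs could map to tensor parallel 8
# within a node and pipeline parallel 2 across nodes (one of several layouts)
vllm serve /models/DeepSeek-671B \
    --tensor-parallel-size 8 \
    --pipeline-parallel-size 2 \
    --host 0.0.0.0 --port 8000
```

Once `ray status` shows all 16 GPUs in the cluster, vLLM can schedule the model's layers across both machines through Ray.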

2026-01-06 · 4 min · 731 words · Me

L20 8-GPU Server Deep Dive: Integrated Deployment Guide for Multimodal AI Systems (LLM + VLM + RAG + ASR + Dify + MinerU)

Overview This guide provides a step-by-step walkthrough for deploying a full-stack multimodal AI system on a single server equipped with 8x NVIDIA L20 GPUs. The stack includes an LLM, a VLM, Embedding/Reranker models (RAG), ASR, Dify (an LLM orchestration and agent platform), and MinerU (PDF extraction).

VRAM Estimation for LLMs. Key strategy: since LLM quality correlates more strongly with parameter count (in billions) than with quantization level, we prioritize models with higher parameter counts. For this deployment, we selected the int4 AWQ versions of Qwen3-235B and GLM-4.5V-106B to maximize overall capability within the available VRAM. ...
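The VRAM budgeting behind this choice can be approximated with a back-of-the-envelope rule: weight memory is parameter count times bytes per parameter. The helper below is an illustration added here, not part of the original guide, and it covers weights only, ignoring KV cache, activations, and framework overhead, which need additional headroom.

```python
def weight_vram_gib(params_billion: float, bits_per_param: float) -> float:
    """Approximate GiB needed just to hold the model weights."""
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1024**3

# Qwen3-235B at int4 (AWQ): roughly 109 GiB of weights
print(round(weight_vram_gib(235, 4), 1))
# GLM-4.5V-106B at int4: roughly 49 GiB of weights
print(round(weight_vram_gib(106, 4), 1))
```

Against 8x L20 (48 GB each, 384 GB total), both int4 models fit together with room left for KV cache and the smaller RAG/ASR services, which is consistent with the guide's decision to favor larger models at lower precision.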

2026-01-05 · 3 min · 612 words · Me