Love Tech, Love Life
Exploring the world of technology, sharing the wisdom of everyday life
2025-05-21

👀 A must-read for cross-border sellers: the spelling formula that stops phone/email mix-ups 🔥



When you try to spell out the B in "Book" for an American client:
Client: Boy? Dog? Buffalo?? (getting visibly annoyed 🐃)

💡 The universal survival rule in North American business circles:
A as in Apple/Alpha | B as in Boy/Butter | C as in Charlie/Cat …
(See the video for the full alphabet.)

✅ How the pros do it:
"The email is Zulu-Hotel-Alpha@..."
(Client: smooth as silk, my friend 🤝)

Quick quiz: when you hear "X as in X-Ray", can you keep up? 💥

#NorthAmericaStartup #24HourOrderSupport #IndependentSiteChatSupport #PhoneAnsweringService #InternationalCalls #ProfessionalCustomerService #EcommerceCustomerService #IndependentSitePhoneSupport #EnglishOrderTaking
2023-10-11

Notes on Deploying Stable Diffusion WebUI on Ubuntu

Goal: run Stable Diffusion WebUI on Ubuntu.

Setup: Ubuntu 20.04 LTS, 48 CPU cores + 32 GB RAM, NVIDIA Tesla P40

lspci
03:00.0 3D controller: NVIDIA Corporation GP102GL [Tesla P40] (rev a1)
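A related check (hypothetical, not part of the original notes) also reports which kernel driver is currently bound to the card, which matters for the nouveau step below; the slot 03:00.0 comes from the lspci output above:

lspci -k -s 03:00.0   # "Kernel driver in use" should read nvidia after the driver install, not nouveau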

System installation and preparation

  • Do a minimal system install, with SSH enabled
  • sudo apt install build-essential # compiler toolchain and headers needed by the CUDA and driver installers
  • Pick the PyTorch build that matches the CUDA version you are about to install (a quick check follows this list)
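Once PyTorch is in place (the WebUI later installs it into its own venv), a quick way to confirm that the build really matches the installed CUDA runtime is the one-liner below; this is a hypothetical sanity check, not a step from the original notes:

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
# expected: a torch version, a CUDA version such as 12.1, and True once the driver and toolkit are installed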

Install the GPU driver

  • First disable the nouveau driver that ships with the system install, then install the driver for the P40 (a consolidated sketch follows this list)
  • sudo nano /etc/modprobe.d/blacklist-nouveau.conf and add:
    • blacklist nouveau
    • options nouveau modeset=0
  • sudo update-initramfs -u
  • sudo reboot
  • sudo ./NVIDIA-Linux-x86_64-515.105.01.run
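The same blacklist can be written without opening an editor, and nvidia-smi is a quick sanity check once the reboot and driver install are done; a minimal sketch, assuming the .run installer above completed without errors:

# write the nouveau blacklist non-interactively (same content as above)
printf 'blacklist nouveau\noptions nouveau modeset=0\n' | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
sudo update-initramfs -u
sudo reboot
# after the reboot, run the driver installer, then confirm the P40 shows up
sudo ./NVIDIA-Linux-x86_64-515.105.01.run
nvidia-smi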

Install CUDA

  • Download the matching version from https://developer.nvidia.com/cuda-toolkit-archive
  • sudo sh cuda_12.1.0_530.30.02_linux.run
  • Adjust the environment variables (a sketch follows this list)
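The post does not spell out which variables to set; a common arrangement, assuming the runfile installed to the default prefix /usr/local/cuda-12.1, is:

# put the CUDA toolkit on PATH and its runtime libraries on LD_LIBRARY_PATH
echo 'export PATH=/usr/local/cuda-12.1/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-12.1/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
nvcc --version   # should report release 12.1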

Install cuDNN

  • Download from https://developer.nvidia.com/rdp/cudnn-download
  • Make sure zlib is installed: sudo apt-get install zlib1g
  • Install cuDNN: sudo dpkg -i cudnn-local-repo-ubuntu2204-8.9.4.25_1.0-1_amd64.deb
  • sudo cp /var/cudnn-local-repo-ubuntu2204-8.9.4.25/cudnn-local-72322D7F-keyring.gpg /usr/share/keyrings/
  • Install cuDNN again after copying the keyring: sudo dpkg -i cudnn-local-repo-ubuntu2204-8.9.4.25_1.0-1_amd64.deb
  • Alternatively (if you downloaded the tar.xz archive): tar xvf cudnn-linux-x86_64-8.9.6.50_cuda12-archive.tar.xz
  • sudo cp cudnn-*-archive/include/cudnn*.h /usr/local/cuda/include
  • sudo cp -P cudnn-*-archive/lib/libcudnn* /usr/local/cuda/lib64
  • sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
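To confirm which cuDNN release actually landed under /usr/local/cuda (this check assumes the tar.xz route above, which copies cudnn_version.h along with the other headers):

# print the cuDNN version macros from the copied header
grep -A 2 '#define CUDNN_MAJOR' /usr/local/cuda/include/cudnn_version.h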

Install the Python prerequisites

  • sh Miniconda3-py310_23.5.2-0-Linux-x86_64.sh
  • apt-get install python3.10-venv # SD WebUI builds its own venv, so the Python venv package must be available
  • conda create -n py310sd python=3.10
  • Activate the environment (a sketch follows this list)
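A minimal sketch of the activation and version check, assuming Miniconda went into its default location ~/miniconda3 and using the py310sd name from the conda create step above:

# make conda usable in this shell, then switch to the Python 3.10 environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate py310sd
python --version   # should report Python 3.10.x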

Install Stable Diffusion WebUI

  • git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
  • ./webui.sh

Post-install optimizations

  • source venv/bin/activate
  • pip install -U xformers
  • sudo apt-get -y install libtcmalloc-minimal4 # Google's TCMalloc memory optimization
  • sudo apt install ffmpeg
  • Adjust the launch parameters in webui-user.sh:


# Commandline arguments for webui.py, for example: export COMMANDLINE_ARGS="--medvram --opt-split-attention"
export COMMANDLINE_ARGS="--listen --xformers --enable-insecure-extension-access"
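After restarting the WebUI, the "Applying attention optimization: xformers... done." line in the log below is the authoritative signal; a hypothetical extra check that xformers is importable inside the WebUI's own venv:

cd ~/stable-diffusion-webui
source venv/bin/activate
python -c "import xformers; print(xformers.__version__)"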


Startup looks normal:
(base) ubuntu@02appai:~$ cd stable-diffusion-webui/
(base) ubuntu@02appai:~/stable-diffusion-webui$ ./webui.sh

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################

################################################################
Running on ubuntu user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################
Using TCMalloc: libtcmalloc_minimal.so.4
Python 3.10.6 (main, Oct 24 2022, 16:07:47) [GCC 11.2.0]
Version: v1.6.0
Commit hash: 5ef669de080814067961f28357256e8fe27544f4
Launching Web UI with arguments: --listen --xformers
Loading weights [6ce0161689] from /home/ubuntu/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Running on local URL:  http://0.0.0.0:7860


To create a public link, set `share=True` in `launch()`.
Startup time: 68.7s (prepare environment: 30.1s, import torch: 12.8s, import gradio: 6.7s, setup paths: 9.5s, initialize shared: 0.9s, other imports: 5.1s, setup codeformer: 0.5s, load scripts: 0.9s, reload hypernetworks: 0.1s, create ui: 1.3s, gradio launch: 1.0s).
Creating model from config: /home/ubuntu/stable-diffusion-webui/configs/v1-inference.yaml
Applying attention optimization: xformers... done.
Model loaded in 73.8s (load weights from disk: 7.3s, create model: 1.1s, apply weights to model: 61.9s, apply half(): 0.2s, load textual inversion embeddings: 0.4s, calculate empty prompt: 2.9s).
