This prompt is designed for Python development environment setup. Given the dependency packages and configuration requirements a user specifies, it generates detailed, accurate environment setup instructions. The output follows a step-by-step technical-documentation style covering virtual environment creation, dependency management, and environment verification, ensuring the setup process is complete and actionable. It suits project initialization, team collaboration, CI/CD pipelines, and other development scenarios, helping developers quickly stand up a Python runtime environment that meets project requirements.
Goal: set up a reproducible, portable development environment for a FastAPI-based Python project.
Before proceeding, complete the following checks.
[macOS/Linux]
python3 --version
[Windows PowerShell]
py --list
# check whether 3.11 is listed
py -3.11 -V
# if not installed, winget can install it (winget required; may need admin)
winget install -e --id Python.Python.3.11
[macOS]
brew install python@3.11
[Linux]
sudo apt-get update
sudo apt-get install -y python3-venv
Build tools (only needed when packages must compile from source):
[Windows]
winget install -e --id Microsoft.VisualStudio.2022.BuildTools
[macOS]
xcode-select --install
[Linux]
sudo apt-get update
sudo apt-get install -y build-essential
[macOS/Linux]
mkdir -p ~/projects/fastapi-app && cd ~/projects/fastapi-app
[Windows PowerShell]
New-Item -ItemType Directory -Path "$env:USERPROFILE\projects\fastapi-app" -Force | Out-Null
Set-Location "$env:USERPROFILE\projects\fastapi-app"
[macOS/Linux]
# use 3.11 if available
if command -v python3.11 >/dev/null 2>&1; then
  python3.11 -m venv .venv
else
  python3 -m venv .venv  # make sure this is 3.10+
fi
source .venv/bin/activate
python -V  # confirm 3.10.x or 3.11.x
[Windows PowerShell]
# try 3.11; if it fails, switch to -3.10
py -3.11 -m venv .venv
.\.venv\Scripts\Activate.ps1
python -V  # confirm 3.10.x or 3.11.x
# If blocked by the execution policy, relax it for the current session:
# Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
python -m pip install --upgrade pip setuptools wheel
requirements.txt:
fastapi==0.115.0
uvicorn[standard]>=0.30,<1.0
pydantic>=2.7,<3.0
sqlalchemy>=2.0,<3.0
alembic~=1.13
python-dotenv~=1.0
httpx~=0.27
requirements-dev.txt:
-r requirements.txt
pytest~=8.3
pre-commit~=3.7
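The two lists above can be written out in one step; a minimal sketch (the filenames requirements.txt and requirements-dev.txt follow from the `-r requirements.txt` include):

```shell
# Write the runtime dependency list
cat > requirements.txt <<'EOF'
fastapi==0.115.0
uvicorn[standard]>=0.30,<1.0
pydantic>=2.7,<3.0
sqlalchemy>=2.0,<3.0
alembic~=1.13
python-dotenv~=1.0
httpx~=0.27
EOF

# Write the dev list, which pulls in the runtime list plus test/lint tools
cat > requirements-dev.txt <<'EOF'
-r requirements.txt
pytest~=8.3
pre-commit~=3.7
EOF
```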
python -m pip install -r requirements-dev.txt
python -m pip check
repos:
  - repo: https://github.com/psf/black
    rev: 24.8.0
    hooks:
      - id: black
        language_version: python3
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.9
    hooks:
      - id: ruff
      # ruff-format omitted: black already handles formatting
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: check-merge-conflict
      - id: end-of-file-fixer
      - id: trailing-whitespace
pre-commit install
pre-commit run --all-files
alembic --version
alembic init migrations
# In alembic.ini, set:
sqlalchemy.url = sqlite:///./app.db
alembic revision -m "init" --autogenerate
alembic upgrade head
# main.py
from fastapi import FastAPI
from pydantic import BaseModel
from dotenv import load_dotenv
import os

load_dotenv()  # read .env if present

app = FastAPI(title="HealthCheck App")

class Echo(BaseModel):
    msg: str

@app.get("/health")
def health():
    return {"status": "ok", "env": os.getenv("APP_ENV", "dev")}

@app.post("/echo")
def echo(body: Echo):
    return {"echo": body.msg}
# .env
APP_ENV=dev
# Listen on all interfaces (reachable from the LAN):
uvicorn main:app --reload --host 0.0.0.0 --port 8000
# Or bind to localhost only:
uvicorn main:app --reload --host 127.0.0.1 --port 8000
[macOS/Linux]
curl http://127.0.0.1:8000/health
curl -X POST http://127.0.0.1:8000/echo -H "Content-Type: application/json" -d '{"msg":"hello"}'
[Windows PowerShell]
Invoke-RestMethod -Uri http://127.0.0.1:8000/health
Invoke-RestMethod -Method Post -Uri http://127.0.0.1:8000/echo -ContentType "application/json" -Body '{"msg":"hello"}'
# test_main.py
from fastapi.testclient import TestClient
from main import app

client = TestClient(app)

def test_health():
    r = client.get("/health")
    assert r.status_code == 200
    assert r.json()["status"] == "ok"

def test_echo():
    r = client.post("/echo", json={"msg": "hi"})
    assert r.status_code == 200
    assert r.json()["echo"] == "hi"
pytest -q
python -V
pip -V
pip check
python -c "import fastapi, uvicorn, pydantic, sqlalchemy, alembic, dotenv, httpx, pytest, pre_commit; print('imports-ok')"
alembic current
pre-commit run --all-files --show-diff-on-failure
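The verification commands above can be collected into a small fail-fast script; a sketch (the filename `verify.sh` is an assumption, not part of the original steps):

```shell
# Create verify.sh, which stops at the first failing check
cat > verify.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
python -V
pip -V
pip check
python -c "import fastapi, uvicorn, pydantic, sqlalchemy, alembic, dotenv, httpx, pytest, pre_commit; print('imports-ok')"
alembic current
pre-commit run --all-files --show-diff-on-failure
echo "environment OK"
EOF
chmod +x verify.sh
```

Run `./verify.sh` inside the activated virtual environment.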
# venv creation fails: install a suitable interpreter first
# Windows:
py -3.11 -m venv .venv
# macOS:
brew install python@3.11
# Linux:
sudo apt-get install -y python3-venv
# Build errors during install: upgrade packaging tools and install compilers
python -m pip install --upgrade pip
# Linux: if build tools are missing
sudo apt-get install -y build-essential
# Windows: if a compiler is required
winget install -e --id Microsoft.VisualStudio.2022.BuildTools
# uvicorn missing or the wrong version:
python -m pip install "uvicorn>=0.30,<1.0"
# Inspect pre-commit failures verbosely:
pre-commit run --all-files --show-diff-on-failure -v
# Port 8000 already in use: start on another port
uvicorn main:app --reload --port 8001
# Dependency conflicts: upgrade pip, reinstall cleanly, then re-check
python -m pip install --upgrade pip
pip install -r requirements-dev.txt --upgrade --use-pep517 --no-cache-dir
pip check
The steps above use venv + pip to provide a minimal, stable cross-platform environment setup that satisfies the dependency version constraints and covers day-to-day development, testing, migrations, and commit quality control. For CI/CD, Docker, or a multi-interpreter matrix (3.10/3.11), add the corresponding scripts and configuration on top of this foundation.
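As one possible extension, the 3.10/3.11 matrix mentioned above could look like this in GitHub Actions (a sketch, not part of the guide; the workflow file path `.github/workflows/ci.yml` and step layout are assumptions):

```yaml
name: ci
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.10", "3.11"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: python -m pip install --upgrade pip
      - run: pip install -r requirements-dev.txt
      - run: pip check
      - run: pytest -q
```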
Goal: on Linux, create an isolated Python 3.10/3.11 development environment, then install and verify the following GPU-accelerated dependencies (CUDA 12.1): PyTorch 2.4.0, TorchVision 0.19.0, TorchAudio 2.4.0, Lightning, Transformers, Accelerate, Datasets, bitsandbytes, tqdm.
Main steps:
[Linux Bash]
nvidia-smi
# confirm the GPU is visible, Driver Version >= 530, and no errors are reported
[Linux Bash]
python3 --version
# if this shows 3.10.x or 3.11.x, the system python can be used to create the venv
Two virtual-environment options are provided below; pick either one. Prefer Option A (venv) when the system already has Python 3.10/3.11; use Option B (Conda) if no suitable Python is available or you run into binary compatibility problems.
Option A (venv):
mkdir -p ~/proj-pt-cu121 && cd ~/proj-pt-cu121
python3.11 -m venv .venv
source .venv/bin/activate
python -V
pip -V
python -m pip install --upgrade pip setuptools wheel
export TORCH_INDEX_URL="https://download.pytorch.org/whl/cu121"
pip install --index-url "$TORCH_INDEX_URL" \
torch==2.4.0+cu121 torchvision==0.19.0+cu121 torchaudio==2.4.0+cu121
cat > requirements.txt <<'EOF'
lightning~=2.4
transformers~=4.45
accelerate~=0.34
datasets~=3.0
bitsandbytes~=0.43
tqdm~=4.66
EOF
pip install -r requirements.txt
pip freeze --exclude-editable > requirements-lock.txt
accelerate config default
# generates ~/.cache/huggingface/accelerate/default_config.yaml with default answers
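The lock file makes the environment reproducible elsewhere; a sketch of a bootstrap script for a fresh machine (the filename `bootstrap.sh` is an assumption; `--extra-index-url` is needed so pip can resolve the `+cu121` torch wheels alongside PyPI packages):

```shell
# Create bootstrap.sh, which recreates the venv from the lock file
cat > bootstrap.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
python3.11 -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip
# the extra index provides the +cu121 builds of torch/torchvision/torchaudio
pip install --extra-index-url "https://download.pytorch.org/whl/cu121" -r requirements-lock.txt
EOF
chmod +x bootstrap.sh
```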
Option B (Conda):
conda create -n pt-cu121 python=3.11 -y
conda activate pt-cu121
python -V
# install PyTorch 2.4.0 + CUDA 12.1 (conda build)
conda install -y pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=12.1 -c pytorch -c nvidia
pip install "lightning~=2.4" "transformers~=4.45" "accelerate~=0.34" \
"datasets~=3.0" "bitsandbytes~=0.43" "tqdm~=4.66"
pip freeze --exclude-editable > requirements-lock.txt
python - <<'PY'
import sys, torch
print("Python:", sys.version)
print("Torch:", torch.__version__)
print("Torch CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA device count:", torch.cuda.device_count())
    print("Current device:", torch.cuda.current_device())
    print("Device name:", torch.cuda.get_device_name(0))
    # simple GPU tensor computation
    a = torch.randn((1024, 1024), device="cuda")
    b = torch.randn((1024, 1024), device="cuda")
    c = a @ b
    print("Matmul OK, c.shape =", tuple(c.shape))
PY
Expected: Torch reports 2.4.0+cu121 (or 2.4.0 for the conda build), CUDA available: True, and the matmul completes without error.
python - <<'PY'
import torchvision, torchaudio, transformers, accelerate, datasets, bitsandbytes, lightning
print("torchvision:", torchvision.__version__)
print("torchaudio:", torchaudio.__version__)
print("transformers:", transformers.__version__)
print("accelerate:", accelerate.__version__)
print("datasets:", datasets.__version__)
print("bitsandbytes:", bitsandbytes.__version__)
print("lightning:", lightning.__version__)
PY
# bitsandbytes self-check: prints its CUDA/driver diagnostics
python -m bitsandbytes
python - <<'PY'
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
model_id = "sshleifer/tiny-gpt2"  # a very small test model
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32)
pipe = pipeline("text-generation", model=model, tokenizer=tok, device=0 if torch.cuda.is_available() else -1)
out = pipe("Hello, PyTorch", max_new_tokens=5)
print(out[0]["generated_text"])
PY
python - <<'PY'
from datasets import Dataset
ds = Dataset.from_dict({"text": ["hello", "world", "datasets"], "label": [0,1,0]})
print(ds)
print(ds[:2])
PY
python - <<'PY'
import torch
from torch.utils.data import DataLoader, TensorDataset
import lightning as L

class TinyModule(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(10, 32),
            torch.nn.ReLU(),
            torch.nn.Linear(32, 2),
        )
        self.loss = torch.nn.CrossEntropyLoss()

    def forward(self, x):
        return self.net(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        logits = self(x)
        loss = self.loss(logits, y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=1e-3)

X = torch.randn(128, 10)
y = torch.randint(0, 2, (128,))
dl = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)
trainer = L.Trainer(
    accelerator="gpu" if torch.cuda.is_available() else "cpu",
    devices=1, max_epochs=1, fast_dev_run=True,
    enable_model_summary=False, logger=False,
)
trainer.fit(TinyModule(), dl)
print("Lightning fast_dev_run OK")
PY
accelerate env
# CUDA not available in torch: reinstall the cu121 wheels cleanly
pip uninstall -y torch torchvision torchaudio
pip cache purge
pip install --index-url "https://download.pytorch.org/whl/cu121" \
  torch==2.4.0+cu121 torchvision==0.19.0+cu121 torchaudio==2.4.0+cu121
# Or switch to the Conda approach (Option B above) and install the remaining dependencies in the same environment
conda create -n pt-cu121 python=3.11 -y
conda activate pt-cu121
conda install -y pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=12.1 -c pytorch -c nvidia
pip install "bitsandbytes~=0.43" "transformers~=4.45" "accelerate~=0.34" "datasets~=3.0" "lightning~=2.4" "tqdm~=4.66"
pip check
# If conflicts are reported, adjust within the version ranges the error suggests (prefer pinning newer compatible minor versions)
pip uninstall -y torch torchvision torchaudio
pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0
which pip  # should point into .venv/bin or the conda env
# optional: relocate the Hugging Face cache
mkdir -p ~/hf-cache
export HF_HOME=~/hf-cache
Following the steps above gives you a reproducible PyTorch/Lightning/Hugging Face development environment matched to CUDA 12.1, validated by minimal checks covering GPU computation, library imports, inference, and training.
With minimal input, this generates a ready-to-execute Python environment setup plan in one step, helping individuals and teams complete and reproduce environment preparation from zero quickly and reliably. By recognizing the combination of dependency list, target system, Python version, and level of detail, it produces scenario-appropriate step-by-step guides, verification methods, and troubleshooting checklists, significantly shortening setup time and reducing inconsistency and hidden risk in collaboration. It suits new-project initialization, team standardization, continuous-delivery pipelines, training, and internal or external handovers, turning "environment problems" from an obstacle into an advantage and improving delivery efficiency and professionalism.
Quickly standardize a team's development environment: generate setup documentation and dependency lists so new members can bring the project up on their first day, reducing failures and communication overhead caused by environment drift.
Reproduce experiment and competition environments in one step: isolate model dependencies, switch library versions easily for comparison experiments, and output shareable reproduction steps so results stay repeatable.
When starting or refactoring a project, generate executable environment instructions from the dependency list, automatically avoiding package conflicts and OS differences, and reach a locally runnable, debuggable state quickly.