
Python异常处理增强专家 (Python Exception Handling Enhancement Expert)


This prompt adds professional exception handling to Python code. It identifies potential risk points in the code and builds a complete error-handling framework around them. Through systematic exception capture, categorized handling, and improved error messages, it significantly improves code robustness and maintainability. It covers common scenarios such as file operations, network requests, and data conversion, and produces clear error logs and user-friendly messages that help developers locate and fix problems quickly.

Original Code Analysis

Identified potential exception risk points, by category:

  • Directory/file operations (I/O exceptions)
    • os.listdir(input_dir): FileNotFoundError, NotADirectoryError, PermissionError, OSError
    • open(path, 'r', encoding='utf-8'): FileNotFoundError, PermissionError, UnicodeDecodeError, OSError
    • csv.DictReader(f): csv.Error (malformed CSV)
    • os.makedirs(os.path.dirname(out_file), exist_ok=True): PermissionError, OSError; also, when out_file has no directory component, os.path.dirname(out_file) is an empty string, which makes os.makedirs fail
    • open(out_file, 'w', encoding='utf-8'): PermissionError, FileNotFoundError, OSError
    • json.dump(payload, f): TypeError (non-serializable values), OSError (write failure)
  • Data processing (type/value exceptions)
    • float((row['amount'] or '').strip() or '0'): ValueError, TypeError (e.g., non-numeric characters or a non-string value)
    • Aggregation step total += amt: TypeError (if amt is not numeric)
  • Other
    • Missing command-line arguments are already covered by defaults; no exception expected
    • Print statements carry essentially no risk

Overall strategy:

  • Keep the core business logic unchanged (scan CSVs, normalize amount to float, aggregate by category, write JSON output).
  • Handle exceptions in layers: fatal directory-level errors terminate in main with a clear message; file-level and row-level errors skip the problematic data and keep processing the remaining files/rows.
  • Catch specific exception types precisely; avoid bare except.
  • Add logging that combines user-friendly messages with optional stack traces (exc_info) for debugging.

Enhanced Code

import os
import csv
import json
import sys
import logging
from datetime import datetime

"""
批量扫描目录下的CSV账目文件,合并并输出汇总JSON。
风险点:目录不存在、文件编码/格式异常、字段缺失、数值转换失败、写盘失败。
当前未做任何异常处理和日志记录。
"""

# Basic logging: INFO level, time + level + message; errors include a stack trace
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s"
)


def scan_csvs(input_dir):
    result = []
    try:
        names = os.listdir(input_dir)
    except FileNotFoundError:
        logging.error("Input directory does not exist: %s", input_dir, exc_info=True)
        raise
    except NotADirectoryError:
        logging.error("Input path is not a directory: %s", input_dir, exc_info=True)
        raise
    except PermissionError:
        logging.error("No permission to access directory: %s", input_dir, exc_info=True)
        raise
    except OSError as e:
        logging.error("Failed to read directory: %s (%s)", input_dir, e, exc_info=True)
        raise

    for name in names:
        if name.lower().endswith('.csv'):
            path = os.path.join(input_dir, name)
            try:
                # newline='' avoids newline-handling issues in the csv module
                with open(path, 'r', encoding='utf-8', newline='') as f:
                    try:
                        reader = csv.DictReader(f)
                        for row in reader:
                            row['source'] = name
                            if 'amount' in row:
                                try:
                                    row['amount'] = float((row['amount'] or '').strip() or '0')
                                except (ValueError, TypeError):
                                    # Non-numeric or malformed value: set to 0 with a warning; do not abort processing
                                    line_no = getattr(reader, "line_num", -1)
                                    logging.warning(
                                        "File %s line %s: failed to parse amount; set to 0. Original value=%r",
                                        name, line_no, row.get('amount'), exc_info=False
                                    )
                                    row['amount'] = 0.0
                            result.append(row)
                    except csv.Error as e:
                        logging.error("解析CSV失败:文件=%s,错误=%s", path, e, exc_info=True)
                        # 跳过该文件,继续处理其他文件
                        continue
            except FileNotFoundError:
                logging.warning("File vanished during scan (possibly moved/deleted concurrently): %s", path, exc_info=True)
                continue
            except PermissionError:
                logging.error("No permission to read file: %s", path, exc_info=True)
                continue
            except UnicodeDecodeError:
                logging.error("File encoding error (not UTF-8, or corrupted): %s", path, exc_info=True)
                continue
            except OSError as e:
                logging.error("Failed to read file: %s (%s)", path, e, exc_info=True)
                continue
    return result


def summarize(rows):
    total = 0.0
    by_cat = {}
    for r in rows:
        amt = r.get('amount', 0.0) or 0.0
        try:
            total += amt
        except TypeError:
            # Defensive double-check: if a non-numeric value still slipped through, treat it as 0 and warn
            logging.warning("Record amount has an unexpected type; treated as 0: %r", amt, exc_info=False)
            amt = 0.0
        cat = r.get('category') or 'unknown'
        by_cat[cat] = by_cat.get(cat, 0.0) + (amt if isinstance(amt, (int, float)) else 0.0)
    return {
        'total': total,
        'by_category': by_cat,
        'count': len(rows)
    }


def main():
    input_dir = sys.argv[1] if len(sys.argv) > 1 else './data'
    out_file = sys.argv[2] if len(sys.argv) > 2 else './output/summary.json'

    try:
        rows = scan_csvs(input_dir)
    except (FileNotFoundError, NotADirectoryError, PermissionError, OSError):
        # Fatal: the input directory is unusable
        logging.error("Aborting: cannot scan input directory %s", input_dir)
        sys.exit(1)

    summary = summarize(rows)
    payload = {
        'generated_at': datetime.utcnow().isoformat(),
        'summary': summary
    }

    # Handle an empty directory component in the output path (meaning the current directory)
    out_dir = os.path.dirname(out_file) or '.'
    try:
        os.makedirs(out_dir, exist_ok=True)
    except PermissionError:
        logging.error("Cannot create output directory (permission denied): %s", out_dir, exc_info=True)
        sys.exit(2)
    except OSError as e:
        logging.error("Failed to create output directory: %s (%s)", out_dir, e, exc_info=True)
        sys.exit(2)

    try:
        with open(out_file, 'w', encoding='utf-8') as f:
            json.dump(payload, f, ensure_ascii=False, indent=2)
    except PermissionError:
        logging.error("Cannot write output file (permission denied): %s", out_file, exc_info=True)
        sys.exit(3)
    except FileNotFoundError:
        logging.error("Invalid output file path: %s", out_file, exc_info=True)
        sys.exit(3)
    except TypeError as e:
        logging.error("JSON serialization failed: %s", e, exc_info=True)
        sys.exit(3)
    except OSError as e:
        logging.error("Failed to write file: %s (%s)", out_file, e, exc_info=True)
        sys.exit(3)

    print(f"Processed {len(rows)} rows, total={summary['total']}")


if __name__ == '__main__':
    main()

Exception Handling Rationale

  • scan_csvs directory access
    • FileNotFoundError / NotADirectoryError / PermissionError / OSError from os.listdir: log the error and re-raise so main terminates uniformly (a fatal configuration problem).
  • scan_csvs file-level reads
    • FileNotFoundError from open(...): warn and skip (the file may have been moved or deleted)
    • PermissionError / OSError: log the error and skip
    • UnicodeDecodeError: non-UTF-8 or corrupted file; log the error and skip that file
    • csv.Error during csv.DictReader: malformed CSV; log the error and skip that file
  • scan_csvs row-level processing
    • ValueError/TypeError when converting amount to float: fall back to 0.0, log a warning that includes the line number (for easy locating), and keep processing subsequent rows and files
  • summarize aggregation
    • TypeError from total += amt: a second line of defense; if a non-numeric value slipped through, treat it as 0 and warn instead of crashing
  • main output stage
    • os.makedirs for the output directory: PermissionError / OSError -> log and exit with a non-zero code
    • open/json.dump when writing: PermissionError / FileNotFoundError / TypeError / OSError -> log and exit with a non-zero code
  • Logging policy
    • error level: exceptions that affect the flow, with stack traces (exc_info=True)
    • warning level: recoverable data problems (e.g., a single row's amount failing to parse), without stack traces to reduce noise
    • info level is deliberately used sparingly to keep output concise
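
To see the row-level degradation path in action, here is a minimal sketch; it assumes the enhanced code above has been saved as summarize_csvs.py (a hypothetical module name) so that scan_csvs and summarize can be imported:

import os
import tempfile

from summarize_csvs import scan_csvs, summarize  # hypothetical module name

with tempfile.TemporaryDirectory() as d:
    # One good row, one row with an unparseable amount, one more good row
    with open(os.path.join(d, "ledger.csv"), "w", encoding="utf-8", newline="") as f:
        f.write("category,amount\nfood,12.5\nfood,oops\nrent,100\n")

    rows = scan_csvs(d)     # logs a warning for the 'oops' row and keeps going
    print(summarize(rows))  # {'total': 112.5, 'by_category': {'food': 12.5, 'rent': 100.0}, 'count': 3}

The bad row is kept with its amount degraded to 0.0 rather than dropped, so count stays at 3 while the totals reflect only parseable values.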

Usage Suggestions

  • Deployment
    • Make sure the input directory exists and is readable and the output directory is writable. For containerized deployments, configure mounted-volume permissions correctly.
    • For more detailed debugging output, set an environment variable at startup or change the level in logging.basicConfig to DEBUG (a sketch follows this list).
  • CSV data quality
    • Prefer UTF-8 encoding (without BOM) for the CSV files. If sources vary, normalize the encoding externally first.
    • For rows that fail repeatedly (the warning logs include file and line number), fix the data source to reduce cleanup cost.
  • Monitoring and alerting
    • Redirect logging output to a file or a centralized log system (e.g., ELK/Graylog) and build alerts on error-level entries.
  • Exit codes
    • 1: input directory unavailable
    • 2: failed to create the output directory
    • 3: failed to write the output file
    • These can drive failure detection in CI/CD or job schedulers
  • Performance and robustness
    • Processing is row by row; bad rows degrade to 0 and processing continues, so individual data problems never abort the batch.
    • For very large datasets, consider streaming aggregation or chunked processing later, keeping the same exception-handling strategy.
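
As a sketch of the environment-variable approach mentioned in the deployment notes (the LOGLEVEL variable name is an assumption, not something the script currently reads):

import logging
import os

# Hypothetical LOGLEVEL environment variable; falls back to INFO if unset or unknown
logging.basicConfig(
    level=getattr(logging, os.getenv("LOGLEVEL", "INFO").upper(), logging.INFO),
    format="%(asctime)s %(levelname)s %(message)s"
)

Launching with LOGLEVEL=DEBUG then raises verbosity without touching the code.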

Original Code Analysis

Key exception risk points identified:

  • Environment configuration
    • Missing or malformed SERVICE_URL (invalid scheme/host)
    • Missing API_TOKEN (may cause 401/403)
  • HTTP requests
    • No timeout specified, risk of hanging
    • No retries on transient network errors
    • No HTTP status check (HTTPError not handled)
    • Streamed downloads not guarded against network drop
  • JSON parsing
    • resp.json() may raise json.JSONDecodeError
  • File system operations
    • os.makedirs can raise OSError (permissions, disk full)
    • open/write can raise OSError
    • Partial file left on failure
  • Data structure assumptions
    • item['id'] KeyError
    • Response shape not validated ('items' missing or wrong type)
  • Control flow robustness
    • Exceptions in fetch_page/download_asset propagate to crash sync_all

To ensure strict, precise exception handling without changing business logic, we add timeouts, targeted retries, explicit status checks, and thorough error handling with clear logs.


Enhanced Code

import os
import json
import time
import logging
from urllib.parse import urlparse
import requests

"""
从服务端分页拉取内容并按需下载附件。
风险点:无超时/重试、HTTP错误未检查、JSON解析失败、磁盘写入失败、环境变量缺失。
"""

# ---- Configuration for robust networking and logging ----
REQUEST_TIMEOUT = (3.05, 15)  # (connect timeout, read timeout)
MAX_RETRIES = 3
BACKOFF_FACTOR = 0.6  # exponential backoff: 0.6, 1.2, 2.4...

# Basic logger setup (stdout/stderr)
logger = logging.getLogger(__name__)
if not logger.handlers:
    handler = logging.StreamHandler()
    formatter = logging.Formatter(
        fmt="%(asctime)s | %(levelname)s | %(name)s | %(message)s"
    )
    handler.setFormatter(formatter)
    logger.addHandler(handler)
logger.setLevel(logging.INFO)

BASE_URL = os.getenv('SERVICE_URL', 'https://api.example.com')
TOKEN = os.getenv('API_TOKEN', '')


def _validate_base_url(url: str) -> None:
    parsed = urlparse(url)
    if not parsed.scheme or not parsed.netloc:
        logger.warning(
            "BASE_URL %r appears invalid (missing scheme or host). "
            "Please set SERVICE_URL like 'https://api.example.com'.",
            url
        )


def _sleep_backoff(attempt: int) -> None:
    # attempt starts at 1
    delay = BACKOFF_FACTOR * (2 ** (attempt - 1))
    time.sleep(delay)


def _request_with_retry(url: str, *, headers=None, stream: bool = False, timeout=REQUEST_TIMEOUT) -> requests.Response:
    """
    GET with strict error handling:
    - Timeout and retry on transient errors (Timeout, ConnectionError)
    - Retry on 5xx, 429, 408; do not retry on other 4xx
    - Raise for non-2xx statuses
    """
    headers = headers or {}
    last_exc = None
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            resp = requests.get(url, headers=headers, stream=stream, timeout=timeout)
            try:
                resp.raise_for_status()
            except requests.exceptions.HTTPError as http_err:
                status = resp.status_code
                # Retry server-side errors and known transient client-side codes
                if status >= 500 or status in (408, 429):
                    logger.warning(
                        "HTTP %s on %s (attempt %d/%d); will retry. Detail: %s",
                        status, url, attempt, MAX_RETRIES, http_err
                    )
                    last_exc = http_err
                else:
                    # Non-retryable client errors
                    logger.error("HTTP %s on %s; not retrying. Detail: %s", status, url, http_err)
                    raise
            else:
                return resp  # success path

        except (requests.exceptions.Timeout, requests.exceptions.ConnectionError) as net_err:
            logger.warning(
                "Network error on %s (attempt %d/%d); will retry. Detail: %s",
                url, attempt, MAX_RETRIES, net_err
            )
            last_exc = net_err

        except requests.exceptions.RequestException as req_err:
            # Other non-retryable request errors (InvalidURL, TooManyRedirects, etc.)
            logger.error("Request error on %s; not retrying. Detail: %s", url, req_err)
            raise

        # Retry if not last attempt
        if attempt < MAX_RETRIES:
            _sleep_backoff(attempt)

    # If we exhausted retries, raise the last exception
    assert last_exc is not None
    raise last_exc


def fetch_page(page=1):
    url = f"{BASE_URL}/v1/items?page={page}"
    headers = {'Authorization': f'Bearer {TOKEN}'} if TOKEN else {}

    # Validate base URL once (non-fatal)
    _validate_base_url(BASE_URL)

    # Perform request with timeout, status check, and retries
    resp = _request_with_retry(url, headers=headers, stream=False, timeout=REQUEST_TIMEOUT)

    # JSON parsing with robust error reporting
    try:
        data = resp.json()
    except json.JSONDecodeError as e:
        # Try to include a small snippet to aid debugging (avoid huge logs)
        snippet = ""
        try:
            snippet = resp.text[:200]
        except Exception:
            pass
        logger.error(
            "Failed to parse JSON for page=%s from %s. Error: %s. Body snippet: %r",
            page, url, e, snippet
        )
        raise
    return data


def download_asset(item, folder='./downloads'):
    url = item.get('asset_url')
    if not url:
        logger.info("Item %r has no asset_url; skipping download.", item.get('id', '<unknown>'))
        return None

    # Ensure item has an 'id' for filename
    item_id = item.get('id')
    if item_id is None:
        logger.warning("Missing 'id' in item; cannot derive filename. Skipping download.")
        return None

    # Prepare folder
    try:
        os.makedirs(folder, exist_ok=True)
    except OSError as e:
        logger.error("Failed to create folder %s: %s", folder, e)
        return None

    filename = os.path.join(folder, f"{item_id}.bin")

    # Download with retry and streaming
    try:
        resp = _request_with_retry(url, headers={}, stream=True, timeout=REQUEST_TIMEOUT)
    except requests.exceptions.RequestException as e:
        logger.error("Failed to start download for item %s from %s: %s", item_id, url, e)
        return None

    # Stream to disk safely; clean up partial files on failure
    try:
        with open(filename, 'wb') as f:
            for chunk in resp.iter_content(chunk_size=8192):
                if chunk:  # filters keep-alive chunks
                    f.write(chunk)
    except requests.exceptions.RequestException as e:
        logger.error("Network error while streaming item %s from %s: %s", item_id, url, e)
        try:
            if os.path.exists(filename):
                os.remove(filename)
        except OSError:
            pass
        return None
    except OSError as e:
        logger.error("File write error for %s: %s", filename, e)
        try:
            if os.path.exists(filename):
                os.remove(filename)
        except OSError:
            pass
        return None
    finally:
        try:
            resp.close()
        except Exception:
            pass

    return filename


def sync_all():
    page = 1
    total = 0
    while True:
        try:
            data = fetch_page(page)
        except requests.exceptions.RequestException as e:
            logger.error("Stopping sync due to request error on page %s: %s", page, e)
            break
        except json.JSONDecodeError as e:
            logger.error("Stopping sync due to JSON parsing error on page %s: %s", page, e)
            break

        if not isinstance(data, dict):
            logger.error("Unexpected response type on page %s: %r", page, type(data))
            break

        items = data.get('items', [])
        if not isinstance(items, list):
            logger.error("Unexpected 'items' type on page %s: %r", page, type(items))
            break

        if not items:
            break

        for it in items:
            if isinstance(it, dict) and it.get('download', False):
                # download_asset handles its own exceptions and logs; no need to catch here
                download_asset(it)
            total += 1

        page += 1
        # Respectful pacing between pages
        time.sleep(0.2)
    print(f'Synced {total} items')


if __name__ == '__main__':
    sync_all()

Exception Handling Rationale

  • Centralized request handling (_request_with_retry)
    • Adds connect/read timeouts to prevent hanging.
    • Uses requests.Response.raise_for_status to surface HTTP errors explicitly.
    • Retries only on transient conditions:
      • Network-level issues: Timeout, ConnectionError.
      • Server-side 5xx and transient 4xx (408, 429).
    • Non-retryable 4xx (e.g., 400, 401, 403, 404) are logged and raised immediately to avoid unintended loops.
    • Exponential backoff reduces server load and avoids a thundering herd (for a declarative alternative built on urllib3's Retry, see the sketch after this list).
  • fetch_page
    • Validates BASE_URL format once and warns if malformed (non-fatal).
    • Delegates HTTP robustness to _request_with_retry.
    • Catches json.JSONDecodeError, logs a short response snippet for debugging, then re-raises to stop sync gracefully.
  • download_asset
    • Verifies required fields (asset_url and id) and logs clearly when missing.
    • Protects os.makedirs against OSError (permissions, missing parents, disk/quota issues).
    • Wraps streamed download:
      • Starts with safe request via _request_with_retry.
      • Handles streaming network errors (requests.exceptions.RequestException) distinctly from file I/O errors (OSError).
      • Cleans up partial files on failure to avoid corrupt artifacts.
      • Ensures Response is closed in a finally block to prevent resource leaks.
  • sync_all
    • Catches and logs request/JSON failures per page and stops the process gracefully, preserving a clear exit state instead of crashing.
    • Validates response/data shapes (dict for data and list for items) before use to guard against schema changes.
    • Leaves per-item download exceptions to be handled within download_asset, keeping the main loop focused on control flow.
  • Logging
    • Provides clear, user-friendly messages with page numbers, item IDs, URLs where helpful.
    • Avoids logging sensitive tokens or full large payloads (limits body snippet to 200 chars).
    • Uses log levels appropriately: info for normal skips, warning for transient issues, error for terminal failures.
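
If you would rather not hand-roll the retry loop, the same policy can be expressed declaratively with urllib3's Retry mounted on a requests Session. A minimal sketch mirroring the MAX_RETRIES/BACKOFF_FACTOR/status-code choices above (assumes urllib3 >= 1.26, where the parameter is named allowed_methods):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry connection errors and 408/429/5xx with exponential backoff;
# respect_retry_after_header honors the server's Retry-After hint automatically.
retry = Retry(
    total=3,
    backoff_factor=0.6,
    status_forcelist=[408, 429, 500, 502, 503, 504],
    allowed_methods=["GET"],
    respect_retry_after_header=True,
)
session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=retry))
session.mount("http://", HTTPAdapter(max_retries=retry))

resp = session.get("https://api.example.com/v1/items?page=1", timeout=(3.05, 15))
resp.raise_for_status()

The trade-off: the adapter approach is more compact, but the hand-rolled loop above gives per-attempt logging, which can matter for debugging.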

Usage Suggestions

  • Configuration
    • Set SERVICE_URL to a full URL with scheme and host, e.g., https://api.example.com.
    • Set API_TOKEN if the API requires authorization; the code will handle empty tokens but may receive 401/403.
    • Adjust REQUEST_TIMEOUT, MAX_RETRIES, and BACKOFF_FACTOR for your environment and API rate limits.
  • Logging and observability
    • Integrate with your application logger or set LOGLEVEL via environment. For more detail during debugging: logger.setLevel(logging.DEBUG).
    • Consider aggregating logs via structured logging (JSON) in production.
  • Operational concerns
    • Test failure scenarios: invalid URL, network outage, 500s, 429 throttling, malformed JSON, disk full, permission errors.
    • Ensure the downloads directory has sufficient space and appropriate permissions.
    • If id collisions across pages are possible, ensure naming strategy suffices (e.g., include page or timestamp).
    • If the API enforces strict rate limits, consider increasing the inter-page sleep or adding a retry-after handler (using response.headers.get('Retry-After')).
  • Safety
    • Do not print or log API_TOKEN.
    • For extremely large files, you may wish to verify Content-Length, enforce a maximum download size (a sketch follows this list), or compute checksums after download.
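
A minimal sketch of the size guard mentioned in the last point; MAX_DOWNLOAD_BYTES and save_capped are illustrative names, not part of the code above:

import requests

MAX_DOWNLOAD_BYTES = 100 * 1024 * 1024  # illustrative 100 MB cap

def save_capped(resp: requests.Response, filename: str) -> None:
    """Stream resp to filename, refusing to write more than MAX_DOWNLOAD_BYTES."""
    # Cheap pre-check when the server declares a length; many servers omit it
    declared = resp.headers.get("Content-Length")
    if declared and declared.isdigit() and int(declared) > MAX_DOWNLOAD_BYTES:
        raise ValueError(f"Asset too large: {declared} bytes")
    # Hard enforcement while streaming, since Content-Length can be absent or wrong
    written = 0
    with open(filename, "wb") as f:
        for chunk in resp.iter_content(chunk_size=8192):
            if chunk:
                written += len(chunk)
                if written > MAX_DOWNLOAD_BYTES:
                    raise ValueError(f"Download exceeded {MAX_DOWNLOAD_BYTES} bytes")
                f.write(chunk)

Inside download_asset you would catch this ValueError alongside the OSError branch and remove the partial file in the same way.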

Original Code Analysis

Key exception risk points, by category:

  • File and I/O
    • Reading the input file: FileNotFoundError, PermissionError, OSError
    • Writing the output file: FileNotFoundError (missing directory), PermissionError, OSError (disk full/device failure)
  • JSON parsing and structure
    • json.load: json.JSONDecodeError (malformed input)
    • Missing or mistyped JSON structure/fields: KeyError, TypeError (data is not a dict or lacks 'values')
  • Type conversion
    • Converting elements of values to float: ValueError, TypeError
    • Converting the command-line window argument to int: ValueError
  • Computation
    • normalize: an empty sequence raises statistics.StatisticsError (mean/pstdev); a standard deviation of 0 causes division by zero (ZeroDivisionError)
    • moving_avg: window <= 0 leads to division by zero or wrong logic
    • The mean in the result, sum(series)/len(series): an empty sequence would divide by zero (avoided by upfront validation)

To keep the business logic unchanged, this enhancement only adds explicit validation and precise exception capture; on failure it prints a clear message and exits gracefully, without altering the output structure or computation flow of the normal path.


Enhanced Code

import json
import sys
import statistics
from pathlib import Path
from json import JSONDecodeError

"""
读取JSON中的数值序列,计算均值、标准化与移动平均并输出。
风险点:文件不存在、JSON格式错误、空序列导致除零、窗口参数不合法、类型转换失败。
"""

def load_numbers(path):
    try:
        with open(path, 'r', encoding='utf-8') as f:
            data = json.load(f)
    except FileNotFoundError as e:
        raise FileNotFoundError(f"Input file does not exist: {path}") from e
    except PermissionError as e:
        raise PermissionError(f"No permission to read file: {path}") from e
    except JSONDecodeError as e:
        raise ValueError(f"Failed to parse JSON: {path} (line {e.lineno}, col {e.colno}): {e.msg}") from e
    except OSError as e:
        raise OSError(f"Failed to read file: {path}: {e.strerror}") from e

    if not isinstance(data, dict) or 'values' not in data:
        raise ValueError("JSON is missing the 'values' field or has the wrong shape (expected a top-level dict containing a 'values' key)")

    raw = data['values']
    if not isinstance(raw, (list, tuple)):
        raise ValueError(f"'values' must be an array (list/tuple); got: {type(raw).__name__}")

    series = []
    for idx, x in enumerate(raw):
        try:
            series.append(float(x))
        except (ValueError, TypeError) as e:
            raise ValueError(f"Element {idx} cannot be converted to float: {x!r}") from e
    return series


def normalize(series):
    if not isinstance(series, (list, tuple)):
        raise TypeError("normalize(series) requires a list or tuple")
    if len(series) == 0:
        raise ValueError("Cannot compute mean/stdev of an empty sequence")
    mean = statistics.mean(series)
    stdev = statistics.pstdev(series)
    if stdev == 0:
        raise ZeroDivisionError("Standard deviation is 0; cannot normalize (all values are identical)")
    return [(x - mean) / stdev for x in series]


def moving_avg(series, window):
    if not isinstance(window, int):
        raise TypeError(f"Window argument must be an integer; got: {type(window).__name__}")
    if window <= 0:
        raise ValueError(f"Window argument must be a positive integer; got: {window}")
    out = []
    for i in range(len(series)):
        if i + 1 < window:
            out.append(None)
        else:
            seg = series[i + 1 - window:i + 1]
            out.append(sum(seg) / window)
    return out


def save_result(path, payload):
    try:
        with open(path, 'w', encoding='utf-8') as f:
            json.dump(payload, f, ensure_ascii=False, indent=2)
    except FileNotFoundError as e:
        raise FileNotFoundError(f"Output directory does not exist or path is invalid: {path}") from e
    except PermissionError as e:
        raise PermissionError(f"No permission to write file: {path}") from e
    except TypeError as e:
        # payload is normally built from basic serializable types; this is defensive
        raise TypeError(f"Result data cannot be serialized to JSON: {e}") from e
    except OSError as e:
        raise OSError(f"Failed to write file: {path}: {e.strerror}") from e


def main():
    try:
        in_path = Path(sys.argv[1] if len(sys.argv) > 1 else 'input.json')
        out_path = Path(sys.argv[2] if len(sys.argv) > 2 else 'result.json')

        if len(sys.argv) > 3:
            try:
                window = int(sys.argv[3])
            except ValueError:
                print(f"Error: invalid window argument '{sys.argv[3]}'; it must be a positive integer.", file=sys.stderr)
                sys.exit(2)
        else:
            window = 5

        series = load_numbers(in_path)
        norm = normalize(series)
        ma = moving_avg(series, window)

        result = {
            'count': len(series),
            'mean': sum(series) / len(series),
            'normalized': norm,
            'moving_avg': ma
        }
        save_result(out_path, result)
        print(f'Wrote {out_path} with {len(series)} values')

    except (FileNotFoundError, PermissionError, OSError) as e:
        print(f"File/system error: {e}", file=sys.stderr)
        sys.exit(1)
    except (ValueError, TypeError, ZeroDivisionError) as e:
        print(f"Data/argument error: {e}", file=sys.stderr)
        sys.exit(1)


if __name__ == '__main__':
    main()

Exception Handling Rationale

  • load_numbers
    • File reads
      • FileNotFoundError/PermissionError/OSError: each specific I/O error reports the path plus the system error message, making permission, path, or device problems easy to pin down.
    • JSON parsing
      • JSONDecodeError: the error includes the file path, line/column numbers, and the original parser message, so format errors are easy to locate (a short demo follows this list).
    • JSON structure validation
      • Not a dict, or 'values' missing: raises ValueError describing the expected structure.
      • 'values' not an array: raises ValueError reporting the actual type.
    • Element conversion
      • Failed float conversion (ValueError/TypeError): reports the offending index and original value, simplifying data cleanup.
  • normalize
    • Argument type: TypeError, guarding against wrong types being passed in.
    • Empty sequence: ValueError, avoiding statistics.StatisticsError and giving a friendlier error message.
    • Zero standard deviation: explicitly raises ZeroDivisionError with the business meaning "all values are identical", avoiding a meaningless normalization result.
  • moving_avg
    • Window type and value: TypeError/ValueError, preventing window <= 0 from causing division by zero or wrong logic.
  • save_result
    • Missing output path/no permission/other OSError: reports the path and system error, easing permission or directory troubleshooting.
    • Serialization TypeError: defensive handling with a clear message if a non-JSON-serializable object appears.
  • main
    • Window argument conversion: catches ValueError separately and exits with code 2 to flag an argument problem.
    • Remaining exceptions are grouped:
      • File/system errors: FileNotFoundError, PermissionError, OSError
      • Data/argument errors: ValueError, TypeError, ZeroDivisionError
    • All errors print a concise, locatable message to stderr and exit with a non-zero code, so shells and schedulers can detect failure.
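
For reference, a short demo of the line/column details the json module exposes, which load_numbers folds into its ValueError message:

import json

try:
    json.loads('{"values": [1, 2,]}')  # trailing comma: invalid JSON
except json.JSONDecodeError as e:
    # e.lineno, e.colno, and e.msg are exactly the fields load_numbers embeds
    print(f"JSON parse failed (line {e.lineno}, col {e.colno}): {e.msg}")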

Usage Suggestions

  • Input file contract
    • The top-level JSON value must be an object containing the key "values", whose value is an array of numbers or of strings convertible to float.
    • If the data may contain invalid elements, pre-clean it before writing input.json.
  • Running and exit codes
    • Normal completion: exit code 0
    • Argument error (invalid window value): exit code 2
    • Other errors (file, data, system): exit code 1
  • Logging and monitoring
    • Errors currently go to stderr, keeping them separate from stdout. In production, integrate the standard logging module, record stack traces (exc_info=True) to a file or centralized log system, and keep the end-user message friendly.
  • Robustness testing
    • Cover: missing input file, unreadable file, corrupted JSON, missing 'values', empty 'values', non-numeric entries in 'values', non-integer or <= 0 window, missing or unwritable output directory, and all-identical values (zero standard deviation). A pytest sketch covering two of these cases follows this list.
  • Performance and safety
    • No broad catches or expensive handling were introduced; exception chains preserve the original context (raise ... from e), which aids debugging without leaking sensitive information.
    • Upfront validation of data and arguments reduces uncertainty and exception propagation in later computation.
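
The pytest sketch referenced in the robustness-testing item, covering two of the listed cases; it assumes the enhanced code is saved as stats_pipeline.py (a hypothetical module name):

import pytest

from stats_pipeline import load_numbers, normalize  # hypothetical module name

def test_broken_json_reports_location(tmp_path):
    p = tmp_path / "input.json"
    p.write_text('{"values": [1, 2', encoding="utf-8")  # truncated JSON
    with pytest.raises(ValueError, match="JSON"):
        load_numbers(p)

def test_constant_series_cannot_be_normalized():
    # All values identical -> standard deviation 0 -> explicit ZeroDivisionError
    with pytest.raises(ZeroDivisionError):
        normalize([3.0, 3.0, 3.0])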

Example Details

Problems Solved

Use it at key points such as code review, pre-release walkthroughs, and hardening legacy systems to add professional-grade exception handling to Python code in one pass. By systematically identifying risk points, handling them by severity, and improving error messages, it noticeably reduces production incidents and rework costs, shortens troubleshooting time, and improves user-perceived stability. The output includes a risk checklist, the hardened code, the reasoning behind the handling, and usage suggestions, forming a closed loop from discovery to fix without changing the core business logic. It fits data processing, web backends, automation scripts, and other scenarios, helping teams converge on shared conventions and replicate best practices quickly.

Intended Users

Backend Python developers

Strengthen exception handling in services and jobs in one pass before release, standardize error messages and logging, and reduce production incidents and manual troubleshooting time.

Data engineers/analysts

Automatically add categorized exception capture and degradation strategies to ETL and data-cleaning scripts, so dirty data does not interrupt the pipeline and errors leave a traceable record.

Automation and script developers

When batch-processing files, downloading over the network, or walking directories, generate robust error handling and retry suggestions automatically, so jobs do not die halfway.

Feature Summary

Scans code comprehensively for risk points, flags error-prone operations, and adds precise exception capture and handling in one pass.
Distinguishes file, network, data-conversion, and other exceptions by business scenario and matches the right strategy to each, avoiding over-catching and omissions.
Generates clear, readable error messages and logging statements that balance user-friendliness with debugging detail, saving time when locating problems.
Leaves the existing business logic untouched and inserts exception-handling structures intelligently, keeping the flow intact while covering the key failure points.
Provides tiered handling and fallback suggestions, offering alternative paths or gentle degradation when an unrecoverable error is hit.
Produces the enhanced code together with explanatory documentation of the key handling decisions and expected effects, ready for review and training.
Supports adjusting strictness by risk level and customizing message style and log depth, to suit stages from development through release.
Optimizes code structure and deduplicates logic, reducing redundant catches and improving readability and maintainability.
Ships templates for common scenarios; file I/O, network requests, and data cleaning can be applied in one step, so even beginners can retrofit code quickly.
Offers deployment and debugging suggestions to help define error-alerting rules and traceback paths, shortening production troubleshooting.

How to Use the Purchased Prompt Template

1. Use it directly in an external chat app

Copy the prompt generated from the template into your usual chat app (such as ChatGPT or Claude) and use it in conversation directly, with no extra development. Suited to quick personal trials and lightweight use.

2. Publish it as an API

Turn the prompt template into an API: your program can modify the template parameters freely and call it through the interface, enabling automation and batch processing. Suited to developer integration and embedding in business systems.

3. Configure it in an MCP client

Configure the corresponding server address in an MCP client so your AI application invokes the prompt template automatically. Suited to advanced users and team collaboration, letting prompts move seamlessly between AI tools.

AI Prompt Price
¥20.00
Try before you buy; pay only after it works for you.

What You Get After Purchase

The complete prompt template
- 627 tokens in total
- 4 adjustable parameters
{ Python代码 } { 输出语言 } { 代码用途 } { 异常处理级别 }
Usage rights to community-contributed content
- Curated community examples to help you get started with the prompt quickly