
Data Debugging Assistant

Updated Dec 1, 2025

This prompt is designed as a debugging expert for the data-processing domain. It helps users systematically identify data-related programming errors, providing step-by-step debugging guides, executable code samples, and test/verification methods. The output is clearly structured and actionable, and incorporates participatory feedback (levels and points) to help users locate problems quickly and build their troubleshooting skills.

  1. Error Identification 🧭
  • Action
    • List the trigger points and likely causes: mixed delimiters producing inconsistent column counts; multiple date formats and time zones breaking parsing; user_id inferred as an integer, dropping leading zeros; amount values carrying currency codes and European decimal notation; UTF-8 and GBK encodings mixed in one file.
  • Applicable code snippet
  • Expected result
    • Five error sources pinned down: delimiters, dates, mixed encodings, ID typing, amount formatting.
  2. Debugging Steps 🔍
  • Action
    • Execute in order: encoding sniffing and unification; automatic delimiter detection and unification; tolerant reading with bad-line quarantine; multi-format date parsing with localization; numeric normalization and currency stripping; forcing IDs to strings; duplicate detection and removal.
  • Applicable code snippet
    • See the complete script in step 3.
  • Expected result
    • A unified UTF-8, comma-delimited CSV; bad lines quarantined to quarantine.csv; a normalized DataFrame produced.
  3. Solution Implementation 🛠️
  • Action
    • Uses pandas + the Python csv module + dateutil, with tolerant reading, bad-line quarantine, and field normalization. Set the configuration values first.
  • Applicable code snippet
    • Executable script (save as clean_events.py or run in a notebook):
import os, io, csv, re, random
from decimal import Decimal, InvalidOperation
import numpy as np
import pandas as pd
from dateutil import parser, tz

# ===== Configuration =====
CONFIG = {
    "expected_cols": 7,                 # column count from the header; None = take from header
    "local_tz": "Asia/Shanghai",        # local time zone
    "user_id_col": "user_id",           # change if your column is named differently
    "timestamp_col": None,              # None = auto-detect
    "amount_col": None,                 # None = auto-detect
    "dedupe_subset": None               # e.g. ["user_id", "timestamp_utc"]; None = dedupe on all columns
}

# ===== 1) Encoding sniffing and unification (decode line by line: UTF-8 first, fall back to GBK; still failing -> quarantine) =====
def unify_encoding(src_path, utf8_path, quarantine_path):
    total, good, bad = 0, 0, 0
    with open(src_path, "rb") as f_in, \
         open(utf8_path, "w", encoding="utf-8", newline="") as f_out, \
         open(quarantine_path, "w", encoding="utf-8", newline="") as f_bad:
        for raw in f_in:
            total += 1
            for enc in ("utf-8-sig", "gbk"):
                try:
                    line = raw.decode(enc)
                    # strip any BOM
                    line = line.replace("\ufeff", "")
                    f_out.write(line)
                    good += 1
                    break
                except UnicodeDecodeError:
                    continue
            else:
                # both encodings failed; quarantine the line
                try:
                    # preserve what we can via replacement characters
                    safe = raw.decode("utf-8", errors="replace")
                except Exception:
                    safe = str(raw)
                f_bad.write(safe)
                bad += 1
    return {"total_lines": total, "encoded_ok": good, "encoded_bad": bad}

# ===== 2) Delimiter unification and bad-line quarantine (auto-switch between comma/semicolon; write out commas) =====
def parse_with_delim(line, delim):
    buf = io.StringIO(line)
    reader = csv.reader(buf, delimiter=delim, quotechar='"', escapechar="\\")
    return next(reader)

def unify_delimiters(utf8_path, unified_path, quarantine_path, expected_cols=None):
    with open(utf8_path, "r", encoding="utf-8", newline="") as f_in, \
         open(unified_path, "w", encoding="utf-8", newline="") as f_out, \
         open(quarantine_path, "a", encoding="utf-8", newline="") as f_bad:  # 追加写入
        writer = csv.writer(f_out, delimiter=",", quotechar='"', escapechar="\\")
        # read the header to determine the main delimiter and column count
        header_line = f_in.readline()
        if not header_line:
            return {"rows_ok": 0, "rows_bad": 0, "expected_cols": 0}
        try:
            h_comma = parse_with_delim(header_line, ",")
        except Exception:
            h_comma = []
        try:
            h_semi = parse_with_delim(header_line, ";")
        except Exception:
            h_semi = []
        header = h_comma if len(h_comma) >= len(h_semi) else h_semi
        main_delim = "," if header is h_comma else ";"
        exp_cols = expected_cols or len(header)
        writer.writerow(header)
        rows_ok, rows_bad = 0, 0
        # process line by line
        for line in f_in:
            row = None
            for delim in (main_delim, ";" if main_delim == "," else ","):
                try:
                    fields = parse_with_delim(line, delim)
                    if len(fields) == exp_cols:
                        row = fields
                        break
                except Exception:
                    continue
            if row is None:
                f_bad.write(line)
                rows_bad += 1
                continue
            writer.writerow(row)
            rows_ok += 1
    return {"rows_ok": rows_ok, "rows_bad": rows_bad, "expected_cols": exp_cols}

# ===== 3) Auto-detect candidate columns =====
def detect_datetime_column(df):
    candidates = [c for c in df.columns if re.search(r"(time|date)", c, flags=re.I)]
    if not candidates:
        candidates = list(df.select_dtypes(include=["object", "string"]).columns)
    best_col, best_ratio = None, 0.0
    for c in candidates:
        s = df[c].dropna().astype(str).head(200)
        ok = 0
        for v in s:
            try:
                parser.parse(v, dayfirst=True, fuzzy=True)
                ok += 1
            except Exception:
                pass
        ratio = ok / max(1, len(s))
        if ratio > best_ratio:
            best_ratio, best_col = ratio, c
    return best_col if best_ratio >= 0.6 else None

def detect_amount_column(df):
    name_hits = [c for c in df.columns if re.search(r"(amount|price|total)", c, flags=re.I)]
    if name_hits:
        return name_hits[0]
    # fallback: look for a column likely to hold amounts (currency symbols or two-decimal values)
    for c in df.select_dtypes(include=["object", "string"]).columns:
        s = df[c].dropna().astype(str).head(50)
        hit = sum(bool(re.search(r"[$€£¥]|[A-Z]{3}|^\d+[.,]\d{2}$", v)) for v in s)
        if hit / max(1, len(s)) >= 0.4:
            return c
    return None

# ===== 4) Date parsing (normalize to UTC; output tz-aware datetime64[ns, UTC]) =====
LOCAL_TZ_CACHE = {}
def get_local_tz(name):
    if name not in LOCAL_TZ_CACHE:
        LOCAL_TZ_CACHE[name] = tz.gettz(name)
    return LOCAL_TZ_CACHE[name]

def parse_datetime_utc(val, local_tz_name):
    if pd.isna(val):
        return pd.NaT
    text = str(val)
    try:
        dt = parser.parse(text, dayfirst=True, fuzzy=True)
    except Exception:
        return pd.NaT
    # assume the local time zone if naive, then convert to UTC
    local_zone = get_local_tz(local_tz_name)
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=local_zone)
    return pd.Timestamp(dt).tz_convert("UTC")

# ===== 5) Amount normalization and currency stripping =====
SYM_TO_CODE = {"$": "USD", "€": "EUR", "£": "GBP", "¥": "CNY"}
EURO_PATTERN = re.compile(r"^\d{1,3}(\.\d{3})*,\d{2}$")
def normalize_amount_cell(val):
    if pd.isna(val):
        return (np.nan, None)
    s = str(val).strip()
    # extract the currency
    m_code = re.search(r"(?i)\b([A-Z]{3})\b", s)
    currency = m_code.group(1).upper() if m_code else None
    m_sym = re.search(r"[$€£¥]", s)
    if currency is None and m_sym:
        currency = SYM_TO_CODE.get(m_sym.group(0))
    # remove currency codes and symbols
    s = re.sub(r"(?i)\b[A-Z]{3}\b", "", s)
    s = re.sub(r"[$€£¥]", "", s).strip()
    # European decimal notation (1.234,56)
    if EURO_PATTERN.fullmatch(s):
        num_str = s.replace(".", "").replace(",", ".")
    else:
        num_str = s.replace(",", "").replace(" ", "")
    num_str = re.sub(r"[^\d\.\-]", "", num_str)
    if num_str == "":
        return (np.nan, currency)
    try:
        dec = Decimal(num_str)
    except InvalidOperation:
        return (np.nan, currency)
    return (dec, currency)

# ===== 6) Main cleaning pipeline =====
def clean_events_csv(source_csv, output_clean_csv, quarantine_csv, config=CONFIG):
    tmp_utf8 = source_csv + ".utf8.tmp"
    tmp_unified = source_csv + ".unified.tmp"

    stats_enc = unify_encoding(source_csv, tmp_utf8, quarantine_csv)
    stats_sep = unify_delimiters(tmp_utf8, tmp_unified, quarantine_csv, expected_cols=config.get("expected_cols"))

    # pandas can stream with chunksize; here we read in one pass (switch to chunksize for large files)
    df = pd.read_csv(
        tmp_unified,
        engine="python",
        dtype={config["user_id_col"]: "string"},
        keep_default_na=False,
        na_values=["N/A", "-", ""],
    )

    # force user_id to string and strip whitespace
    if config["user_id_col"] in df.columns:
        df[config["user_id_col"]] = df[config["user_id_col"]].astype("string").str.strip()

    # auto-detect column names
    ts_col = config.get("timestamp_col") or detect_datetime_column(df)
    amt_col = config.get("amount_col") or detect_amount_column(df)

    # parse dates -> UTC
    parse_ok = 0
    if ts_col:
        df["timestamp_utc"] = df[ts_col].map(lambda v: parse_datetime_utc(v, config["local_tz"]))
        parse_ok = df["timestamp_utc"].notna().sum()
    else:
        df["timestamp_utc"] = pd.NaT

    # normalize amounts and strip currency
    amt_ok = 0
    if amt_col:
        pairs = df[amt_col].map(normalize_amount_cell)
        df["amount_dec"] = pairs.map(lambda x: x[0])
        df["currency"] = pairs.map(lambda x: x[1])
        # convert to float for downstream math; keep the Decimal column for auditing
        df["amount"] = df["amount_dec"].astype("float64")
        amt_ok = df["amount"].notna().sum()

    # deduplication strategy
    before = len(df)
    if config.get("dedupe_subset"):
        df_clean = df.drop_duplicates(subset=config["dedupe_subset"], keep="first")
    else:
        df_clean = df.drop_duplicates(keep="first")
    after = len(df_clean)

    # write output
    df_clean.to_csv(output_clean_csv, index=False)

    # clean up temp files
    for p in (tmp_utf8, tmp_unified):
        try:
            os.remove(p)
        except Exception:
            pass

    metrics = {
        "total_lines": stats_enc["total_lines"],
        "encoded_ok": stats_enc["encoded_ok"],
        "encoded_bad": stats_enc["encoded_bad"],
        "rows_ok": stats_sep["rows_ok"],
        "rows_bad": stats_sep["rows_bad"],
        "expected_cols": stats_sep["expected_cols"],
        "parse_time_ok": parse_ok,
        "parse_amount_ok": amt_ok,
        "rows_before": before,
        "rows_after": after,
    }
    return df_clean, metrics

# ===== 7) Validation and scoring =====
def validate(df, metrics, config=CONFIG, sample_n=20, min_year=2000, max_year=2035, max_abs_amount=1e9):
    report = {}

    # read success rate
    read_rate = metrics["rows_ok"] / max(1, metrics["rows_ok"] + metrics["rows_bad"])
    report["read_rate"] = read_rate

    # date parse success rate
    if "timestamp_utc" in df.columns:
        time_rate = df["timestamp_utc"].notna().mean()
        report["time_parse_rate"] = time_rate
        # date range constraint
        dt_min = pd.Timestamp(f"{min_year}-01-01", tz="UTC")
        dt_max = pd.Timestamp(f"{max_year}-12-31", tz="UTC")
        in_range = df["timestamp_utc"].dropna().between(dt_min, dt_max).mean() if df["timestamp_utc"].notna().any() else np.nan
        report["time_in_range_rate"] = in_range

    # amount constraint
    if "amount" in df.columns:
        amt_valid = df["amount"].dropna().apply(lambda x: abs(x) <= max_abs_amount).mean() if df["amount"].notna().any() else np.nan
        report["amount_valid_rate"] = amt_valid

    # duplicate-rate statistics
    dupe_dropped = metrics["rows_before"] - metrics["rows_after"]
    dupe_rate = dupe_dropped / max(1, metrics["rows_before"])
    report["dedupe_drop_rate"] = dupe_rate

    # random-sample spot check
    sample = df.sample(min(sample_n, len(df)), random_state=42) if len(df) else df
    sample_ok = {
        "timestamp_non_null": int(sample["timestamp_utc"].notna().sum()) if "timestamp_utc" in df.columns else 0,
        "amount_non_null": int(sample["amount"].notna().sum()) if "amount" in df.columns else 0,
    }
    report["sample_ok"] = sample_ok

    # simple assertions (recorded, not raised)
    report["assertions"] = {
        "no_int_user_id": df[config["user_id_col"]].dtype.name == "string",
    }

    # interactive scoring
    # weighted across read success rate, date parse rate, and dedupe elimination rate
    score = 0
    score += int(round(read_rate * 4))           # 0-4 points
    score += int(round(report.get("time_parse_rate", 0) * 3))  # 0-3 points
    score += int(round(dupe_rate * 3))           # 0-3 points
    report["score"] = score
    return report

if __name__ == "__main__":
    # example invocation: adjust paths and config, then run
    src = "user_events.csv"
    out = "user_events_clean.csv"
    quarantine = "quarantine.csv"
    CONFIG.update({
        "expected_cols": 7,
        "local_tz": "Asia/Shanghai",
        "user_id_col": "user_id",
        "timestamp_col": None,     # 自动检测
        "amount_col": None,        # 自动检测
        "dedupe_subset": None      # 或例如 ["user_id", "timestamp_utc"]
    })
    df_clean, metrics = clean_events_csv(src, out, quarantine, CONFIG)
    report = validate(df_clean, metrics, CONFIG)
    print("Metrics:", metrics)
    print("Report:", report)
  • Expected result
    • The read stage no longer raises ParserError; bad lines are written to quarantine.csv.
    • Dates are unified into a tz-aware UTC column timestamp_utc, accepting formats such as 2024/11/02 14:23, 02-11-2024 2:23 PM, and 2024-11-02T14:23:01Z.
    • user_id stays a string, preserving leading zeros.
    • amount yields a numeric column amount plus a currency column, accepting values like "1.234,56" and "USD 23.50".
    • The output user_events_clean.csv is reproducible.
  4. Testing and Verification ✅
  • Action
    • Run the built-in validate: random-sample checks, date-range and amount assertions, duplicate-rate statistics; print the score.
  • Applicable code snippet
    • Already implemented in the script's validate; it can also be run manually in a notebook:
df_clean, metrics = clean_events_csv("user_events.csv", "user_events_clean.csv", "quarantine.csv", CONFIG)
report = validate(df_clean, metrics, CONFIG)
print(f"Read success rate: {report['read_rate']:.2%}")
print(f"Date parse rate: {report.get('time_parse_rate', 0):.2%}")
print(f"Date in-range rate: {report.get('time_in_range_rate', np.nan)}")
print(f"Amount validity rate: {report.get('amount_valid_rate', np.nan)}")
print(f"Dedupe elimination rate: {report['dedupe_drop_rate']:.2%}")
print(f"Score: {report['score']}/10")
  • Expected result
    • The console shows the metrics and score, usable for ongoing debugging and regression checks.

Bonus Tips 🎯

  • For very large files: switch the pandas read to chunksize=100_000, apply parsing and normalization per chunk, then concatenate or write out directly.
  • If the date column is known: set CONFIG["timestamp_col"] to avoid auto-detection mistakes.
  • If currencies must be kept for mapping or FX conversion: retain the currency column and handle it separately.
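The chunked-read suggestion can be sketched as follows. This is a minimal outline, not the full pipeline: it assumes the unified CSV from step 2 with a user_id column, and leaves the per-row normalization as a comment.

```python
import pandas as pd

def clean_in_chunks(unified_csv: str, out_csv: str, chunksize: int = 100_000) -> int:
    """Stream the unified CSV in chunks, writing cleaned rows incrementally."""
    rows = 0
    first = True
    for chunk in pd.read_csv(unified_csv, chunksize=chunksize, dtype={"user_id": "string"}):
        # apply the per-row steps here (date parsing, amount normalization, within-chunk dedupe)
        chunk.to_csv(out_csv, mode="w" if first else "a", header=first, index=False)
        rows += len(chunk)
        first = False
    return rows
```

Note that within-chunk drop_duplicates misses repeats that straddle chunk boundaries; cross-chunk dedupe needs a shared set of seen keys (or a final pass over the output).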

Interactive Scoring and Points Rules 🕹️

  • Read success rate ≥95%: 4 pts; 85–95%: 3 pts; 70–85%: 2 pts; <70%: 1 pt
  • Date parse rate ≥90%: 3 pts; 75–90%: 2 pts; 50–75%: 1 pt; <50%: 0 pts
  • Dedupe elimination rate ≥20%: 3 pts; 10–20%: 2 pts; 1–10%: 1 pt; <1%: 0 pts
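The script's validate approximates these tiers by rounding rates; a literal implementation of the table above would look like this sketch:

```python
def tiered_score(read_rate: float, time_rate: float, dedupe_rate: float) -> int:
    """Score the three rubric rows exactly as tiered above (max 10)."""
    # read success rate: 0-4 points
    if read_rate >= 0.95:
        pts = 4
    elif read_rate >= 0.85:
        pts = 3
    elif read_rate >= 0.70:
        pts = 2
    else:
        pts = 1
    # date parse rate: 0-3 points
    if time_rate >= 0.90:
        pts += 3
    elif time_rate >= 0.75:
        pts += 2
    elif time_rate >= 0.50:
        pts += 1
    # dedupe elimination rate: 0-3 points
    if dedupe_rate >= 0.20:
        pts += 3
    elif dedupe_rate >= 0.10:
        pts += 2
    elif dedupe_rate >= 0.01:
        pts += 1
    return pts
```

This can be dropped in as a replacement for the round()-based scoring inside validate if exact tier boundaries matter.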

Task complete! 10 points earned. Current level: Junior Debugger. Next level: Code Fixer (20 points away).

1️⃣ Error Identification 🔎

  • Action: define the four problem classes and their causes
  • Applicable code snippet:
    -- typical non-aggregated-column error
    SELECT order_date, SUM(total)
    FROM orders
    GROUP BY DATE_TRUNC('month', order_date);
    -- fails with: ERROR: column "order_date" must appear in the GROUP BY clause or be used in an aggregate function
  • Expected result:
    • Non-aggregated column error: order_date appears in SELECT but not in GROUP BY
    • Invalid date strings (e.g. '2024/11/31') fail in CAST/TO_TIMESTAMP
    • The refund JOIN double-counts (multi-row matches), inflating the net total
    • Orders are in UTC, refunds in local time, so month boundaries misalign

2️⃣ Debugging Steps 🧭

A. Enumerate and isolate invalid dates

  • Action: identify unparseable date strings so they don't enter calculations before cleaning
  • Applicable code snippet:
    SELECT o.id, o.order_date::text AS raw_date
    FROM orders o
    WHERE NOT (o.order_date::text ~ '^\d{4}[-/]\d{2}[-/]\d{2}(\s\d{2}:\d{2}(:\d{2})?)?$');
  • Expected result: a list of malformed date texts, ready for later filtering

B. Check month boundaries across time zones

  • Action: compute the month of the same timestamp in UTC and in the local zone, and observe the boundary shift
  • Applicable code snippet (PostgreSQL):
    SELECT r.refund_date::text AS raw,
           DATE_TRUNC('month', (TO_TIMESTAMP(regexp_replace(r.refund_date::text,'/','-'),'YYYY-MM-DD HH24:MI:SS')
               AT TIME ZONE 'Asia/Shanghai')) AS month_local,
           DATE_TRUNC('month', (TO_TIMESTAMP(regexp_replace(r.refund_date::text,'/','-'),'YYYY-MM-DD HH24:MI:SS')
               AT TIME ZONE 'Asia/Shanghai')) AT TIME ZONE 'UTC' AS month_utc
    FROM refunds r
    LIMIT 10;
  • Expected result: samples where the local and UTC months disagree, confirming the need to standardize on UTC

C. Identify duplicate refunds

  • Action: locate duplicates on order_id + refund_date + amount
  • Applicable code snippet:
    SELECT order_id, refund_date, amount, COUNT(*) AS dup_cnt
    FROM refunds
    GROUP BY order_id, refund_date, amount
    HAVING COUNT(*) > 1;
  • Expected result: the set of duplicate keys with counts, as the basis for deduplication

D. Verify the aggregation dimension

  • Action: ensure only DATE_TRUNC('month', ...) is used as the grouping dimension
  • Applicable code snippet:
    SELECT DATE_TRUNC('month', order_ts)::date AS month_utc, SUM(total)
    FROM some_view
    GROUP BY DATE_TRUNC('month', order_ts);
  • Expected result: the raw date column is no longer referenced, so the GROUP BY error disappears

3️⃣ Solution Implementation 🛠

A. CTE pre-cleaning: date parsing and time zone normalization (to UTC), plus deduplication

  • Action: use a CTE pipeline to clean the date text in orders/refunds, validate it, build UTC timestamps, flag and drop invalid values; deduplicate with a window function; use a single join to avoid row multiplication
  • Applicable code snippet (PostgreSQL; change 'Asia/Shanghai' to your local time zone):

WITH params AS (
    SELECT 'Asia/Shanghai'::text AS local_tz
),

-- 1) Clean orders (assumes order_date is text; if it is already timestamp, simplify to AT TIME ZONE 'UTC')
orders_clean AS (
    SELECT o.id, o.user_id, o.total::numeric AS total, o.currency,
           -- normalize separators
           regexp_replace(trim(o.order_date::text), '[./]', '-', 'g') AS s
    FROM orders o
),
orders_valid AS (
    SELECT id, user_id, total, currency,
           -- build a UTC timestamptz from the parsed text (order times carry UTC semantics);
           -- nullif('') guards rows with a missing time or missing seconds
           make_timestamptz(
               split_part(s,'-',1)::int,
               split_part(s,'-',2)::int,
               split_part(split_part(s,' ',1),'-',3)::int,
               coalesce(nullif(split_part(split_part(s,' ',2),':',1), ''), '0')::int,
               coalesce(nullif(split_part(split_part(s,' ',2),':',2), ''), '0')::int,
               coalesce(nullif(split_part(split_part(s,' ',2),':',3), ''), '0')::int,
               'UTC'
           ) AS order_ts_utc
    FROM orders_clean
    WHERE s ~ '^\d{4}-\d{2}-\d{2}(\s\d{2}:\d{2}(:\d{2})?)?$'
      AND split_part(s,'-',2)::int BETWEEN 1 AND 12
      AND split_part(split_part(s,' ',1),'-',3)::int BETWEEN 1 AND
          EXTRACT(day FROM (date_trunc('month', make_date(
              split_part(s,'-',1)::int, split_part(s,'-',2)::int, 1
          )) + interval '1 month - 1 day'))
),
orders_dedup AS (
    -- dedupe on the primary key; if duplicate ids exist, keep the latest timestamp
    SELECT DISTINCT ON (id) id, user_id, total, currency, order_ts_utc
    FROM orders_valid
    ORDER BY id, order_ts_utc DESC
),

-- 2) Clean refunds (interpret in the local time zone, then convert to UTC)
refunds_clean AS (
    SELECT r.order_id, r.amount::numeric AS amount,
           regexp_replace(trim(r.refund_date::text), '[./]', '-', 'g') AS s,
           (SELECT local_tz FROM params) AS local_tz
    FROM refunds r
),
refunds_valid AS (
    SELECT order_id, amount,
           make_timestamptz(
               split_part(s,'-',1)::int,
               split_part(s,'-',2)::int,
               split_part(split_part(s,' ',1),'-',3)::int,
               coalesce(nullif(split_part(split_part(s,' ',2),':',1), ''), '0')::int,
               coalesce(nullif(split_part(split_part(s,' ',2),':',2), ''), '0')::int,
               coalesce(nullif(split_part(split_part(s,' ',2),':',3), ''), '0')::int,
               local_tz
           ) AS refund_ts_utc
    FROM refunds_clean
    WHERE s ~ '^\d{4}-\d{2}-\d{2}(\s\d{2}:\d{2}(:\d{2})?)?$'
      AND split_part(s,'-',2)::int BETWEEN 1 AND 12
      AND split_part(split_part(s,' ',1),'-',3)::int BETWEEN 1 AND
          EXTRACT(day FROM (date_trunc('month', make_date(
              split_part(s,'-',1)::int, split_part(s,'-',2)::int, 1
          )) + interval '1 month - 1 day'))
),
refunds_mark AS (
    -- window-function dedupe: keep the first row per (order_id, refund_ts_utc, amount)
    SELECT order_id, amount, refund_ts_utc,
           ROW_NUMBER() OVER (
               PARTITION BY order_id, refund_ts_utc, amount
               ORDER BY order_id
           ) AS rn
    FROM refunds_valid
),
refunds_dedup AS (
    SELECT order_id, amount, refund_ts_utc
    FROM refunds_mark
    WHERE rn = 1
),

-- 3) unify the aggregation dimension to the UTC month
monthly_orders AS (
    SELECT DATE_TRUNC('month', order_ts_utc) AS month_utc, currency, SUM(total) AS order_total
    FROM orders_dedup
    GROUP BY DATE_TRUNC('month', order_ts_utc), currency
),
monthly_refunds AS (
    -- join once to pick up currency, avoiding row multiplication
    SELECT DATE_TRUNC('month', rd.refund_ts_utc) AS month_utc, o.currency, SUM(rd.amount) AS refund_total
    FROM refunds_dedup rd
    INNER JOIN orders_dedup o ON o.id = rd.order_id
    GROUP BY DATE_TRUNC('month', rd.refund_ts_utc), o.currency
),

-- 4) net revenue
net_revenue AS (
    SELECT mo.month_utc, mo.currency, mo.order_total,
           COALESCE(mr.refund_total, 0) AS refund_total,
           mo.order_total - COALESCE(mr.refund_total, 0) AS net_total
    FROM monthly_orders mo
    LEFT JOIN monthly_refunds mr
      ON mr.month_utc = mo.month_utc AND mr.currency = mo.currency
)
SELECT * FROM net_revenue
ORDER BY month_utc, currency;

  • Expected result:
    • Invalid dates are dropped, so CAST no longer fails
    • Orders and refunds are both converted to UTC, unifying month boundaries
    • Refunds are deduplicated on (order_id, refund_ts_utc, amount)
    • The monthly aggregation contains no non-aggregated columns, so the error disappears
    • Net revenue equals the monthly order total minus the monthly refund total

B. Index and Constraint Suggestions

  • Action: speed up aggregation and joins, and prevent duplicate inserts

  • Applicable code snippet:
    -- if you can migrate to proper types (strongly recommended):
    -- orders(order_ts_utc timestamptz), refunds(refund_ts_utc timestamptz)
    CREATE INDEX idx_orders_ts_currency ON orders (order_ts_utc, currency);
    CREATE INDEX idx_refunds_ts_order ON refunds (refund_ts_utc, order_id);

    -- prevent duplicates (if the business allows key uniqueness)
    CREATE UNIQUE INDEX CONCURRENTLY u_refunds_dedupe ON refunds (order_id, refund_date, amount);

  • Expected result:

    • Aggregations and joins use the indexes, reducing full-table scans
    • Duplicate refund inserts are blocked (or at least easy to detect)

C. EXPLAIN Analysis

  • Action: verify that the execution plan is "two aggregations + one join"
  • Applicable code snippet:
    EXPLAIN (ANALYZE, BUFFERS)
    WITH params AS (...), orders_clean AS (...), orders_valid AS (...), orders_dedup AS (...),
         refunds_clean AS (...), refunds_valid AS (...), refunds_mark AS (...), refunds_dedup AS (...),
         monthly_orders AS (...), monthly_refunds AS (...), net_revenue AS (...)
    SELECT * FROM net_revenue;
  • Expected result:
    • HashAggregate/GroupAggregate appears for monthly_orders and monthly_refunds
    • A HashJoin/Left Join handles month_utc + currency
    • Index hits in the JOIN and filter stages (e.g. idx_refunds_ts_order)

4️⃣ Testing and Verification ✅

A. Sample Data and Expected-Value Assertions

  • Action: insert boundary and duplicate samples, then verify the net totals

  • Applicable code snippet:
    -- samples (run in a test database)
    INSERT INTO orders(id, user_id, order_date, total, currency) VALUES
    (1, 101, '2024-10-31 23:30', 100, 'USD'),  -- UTC
    (2, 102, '2024/11/01', 50, 'USD'),
    (3, 103, '2024/11/31', 70, 'USD');         -- invalid; will be dropped

    INSERT INTO refunds(order_id, refund_date, amount) VALUES
    (1, '2024-11-01 08:00', 30),  -- local Asia/Shanghai; 2024-11-01 00:00 in UTC
    (1, '2024-11-01 08:00', 30);  -- duplicate

  • Expected result:

    • 2024-10: order_total=100, refund_total=0, net_total=100
    • 2024-11: order_total=50, refund_total=30, net_total=20
    • The order with id=3 is filtered out and does not enter the aggregation
    • The duplicate refund is counted only once

B. Boundary-Month and Cross-Month Order Checks

  • Action: verify local-to-UTC month alignment
  • Applicable code snippet:
    SELECT DATE_TRUNC('month', make_timestamptz(2024,11,1,8,0,0,'Asia/Shanghai')) AS month_utc_refund,
           DATE_TRUNC('month', TIMESTAMPTZ '2024-10-31 23:30+00') AS month_utc_order;
  • Expected result:
    • The refund falls in the 2024-11 month
    • The order falls in the 2024-10 month

C. Before/After Duplicate-Key Counts

  • Action: confirm the dedupe takes effect
  • Applicable code snippet:
    -- before dedupe
    SELECT COUNT(*) AS refunds_raw FROM refunds;
    -- after dedupe (CTE)
    WITH refunds_valid AS (...), refunds_mark AS (...)
    SELECT COUNT(*) AS refunds_after FROM refunds_mark WHERE rn = 1;
  • Expected result:
    • refunds_after < refunds_raw: duplicates eliminated

D. Correctness Assertions

  • Action: assert on the net totals
  • Applicable code snippet:
    WITH final AS (
        -- the final SELECT of the net_revenue CTE above
        SELECT * FROM net_revenue
    )
    SELECT SUM(CASE WHEN month_utc = DATE '2024-10-01' THEN (net_total = 100)::int ELSE 0 END) AS ok_oct,
           SUM(CASE WHEN month_utc = DATE '2024-11-01' THEN (net_total = 20)::int ELSE 0 END) AS ok_nov
    FROM final;
  • Expected result: ok_oct=1, ok_nov=1 (pass)

Interactive Scoring 🎯

  • Performance: 8/10 (indexes plus two aggregations + one join; stable)
  • Correctness: 9/10 (unified UTC, correct aggregation dimension, duplicates removed)
  • Maintainability: 8/10 (the CTE cleaning is clear; consider migrating to strongly typed columns or an indexed materialized view)

Suggested Iterations

  • Solidify the cleaning logic into materialized views (mv_orders_clean, mv_refunds_clean) and index them
  • If the business allows, add a unique constraint to block duplicate refund inserts
  • Make the local time zone a configuration parameter instead of hard-coding it

Task complete! 10 points earned. Current level: Junior Debugger. Next level: Code Fixer (20 points away).

  1. Error Identification 🔎
  • Action
    • Pinpoint the root causes: calling match on a missing field; inconsistent nested structures; Unicode cleaning that misses emoji and zero-width characters; heavy regex causing performance and memory problems; duplicate fetches at pagination boundaries.
  • Applicable code snippet
    • Problem example (anti-pattern)
      // anti-pattern: text may be undefined, so calling .match throws a TypeError
      const phrases = item.text.match(/[^\s.!?]+/g);
      
  • Expected result
    • Problem sources identified; what is needed: shape/schema validation, optional chaining with defaults, precompiled regexes with lightweight cleaning, streaming generator processing, idempotent cursor pagination.
  2. Debugging Steps 🧭
  • Action
    • Localize step by step: schema-validate inputs and API responses; log field distributions before processing; replace fragile accesses with optional chaining and defaults; sample regex cost and memory; simulate duplicate pagination cursors.
  • Applicable code snippet
    // 1) lightweight schema validation and safe access
    import { z } from "zod";
    
    const ItemSchema = z.object({
      id: z.union([z.string(), z.number()]).optional(),
      text: z.string().optional(),
      metadata: z.object({ lang: z.string().optional() }).optional(),
    });
    
    const PageSchema = z.object({
      items: z.array(ItemSchema),
      nextCursor: z.string().nullable().optional(),
    });
    
    // 2) field-distribution logging and missing-field counts
    function logItemShape(items: unknown[]) {
      let missingText = 0, hasHTML = 0;
      for (const it of items) {
        const parsed = ItemSchema.safeParse(it);
        if (!parsed.success || !parsed.data?.text) missingText++;
        else if (/<[^>]+>/.test(parsed.data.text)) hasHTML++;
      }
      console.log({ missingText, hasHTML, total: items.length });
    }
    
    // 3) precompiled regexes and optional chaining
    const RE = {
      TAG: /<[^>]*>/g,                               // strip HTML tags
      ZERO_WIDTH: /[\u200B-\u200D\uFEFF]/g,          // zero-width characters
      CONTROL: /[\p{Cc}\p{Cf}]/gu,                   // control and format characters
      EMOJI: /\p{Extended_Pictographic}/gu,          // emoji
      MULTI_WS: /\s{2,}/g,                           // runs of whitespace
      SPLIT: /[.!?;:,、。!?;]+/u                 // sentence boundaries
    };
    
    // 4) simple sampling: elapsed time and memory
    function samplePerf(label: string) {
      const start = performance.now();
      const memStart = process.memoryUsage().heapUsed;
      return () => {
        const ms = performance.now() - start;
        const memDelta = process.memoryUsage().heapUsed - memStart;
        console.log(`${label}: ${ms.toFixed(1)}ms, Δheap=${(memDelta/1024/1024).toFixed(2)}MB`);
      };
    }
    
    // 5) simulate a duplicated pagination cursor
    function makeFakePage(cursor?: string) {
      if (!cursor) return { items: [{ text: "A" }], nextCursor: "c2" };
      if (cursor === "c2") return { items: [{ text: "B" }], nextCursor: null };
      return { items: [], nextCursor: null };
    }
    
  • Expected result
    • The problems can be reproduced and quantified; with safe access and precompiled regexes the TypeError disappears, and the regex and pagination load and defects become measurable.
  3. Solution Implementation 🛠️
  • Action
    • Build a streaming pipeline: a paginated-fetch async generator + normalization/cleaning + phrase extraction + dedupe; implement idempotent cursors; clean Unicode conservatively; optimize regex and memory use.
  • Applicable code snippet
    // Executable example: Node.js ≥18, TypeScript
    // npm i zod
    // run: ts-node index.ts
    
    import { z } from "zod";
    
    type Item = z.infer<typeof ItemSchema>;
    type Page = z.infer<typeof PageSchema>;
    
    const RE = {
      TAG: /<[^>]*>/g,
      ZERO_WIDTH: /[\u200B-\u200D\uFEFF]/g,
      CONTROL: /[\p{Cc}\p{Cf}]/gu,
      EMOJI: /\p{Extended_Pictographic}/gu,
      MULTI_WS: /\s{2,}/g,
      SPLIT: /[.!?;:,、。!?;]+/u,
    };
    
    function htmlStrip(s: string): string {
      // naive tag stripping; only partially decode entities to limit complexity
      return s
        .replace(RE.TAG, " ")
        .replace(/&nbsp;/g, " ")
        .replace(/&amp;/g, "&")
        .replace(/&lt;/g, "<")
        .replace(/&gt;/g, ">");
    }
    
    function normalizeText(raw: string, lang?: string): string {
      // NFC normalization, tag stripping, control/zero-width/emoji removal, whitespace unification
      const nfc = raw.normalize("NFC");
      const stripped = htmlStrip(nfc);
      const noZW = stripped.replace(RE.ZERO_WIDTH, "");
      const noCtrl = noZW.replace(RE.CONTROL, "");
      const noEmoji = noCtrl.replace(RE.EMOJI, "");      // skip this step to keep emoji
      const sp = noEmoji.replace(RE.MULTI_WS, " ").trim();
      // case-fold by language (example: lowercase English, leave Chinese unchanged)
      const lower = lang && /^en\b/i.test(lang) ? sp.toLowerCase() : sp;
      return lower;
    }
    
    function extractPhrases(text: string, lang?: string): string[] {
      // split on sentence boundaries, then trim lightly, avoiding heavy regex
      const sentences = text.split(RE.SPLIT).map(s => s.trim()).filter(Boolean);
      const phrases: string[] = [];
      for (const s of sentences) {
        // further split on whitespace and recombine phrases (2-10 words) to avoid overlong output
        const tokens = s.split(/\s+/).filter(Boolean);
        if (tokens.length === 0) continue;
        // simplified: use the whole sentence as the phrase, capped in length
        const maxLen = 15; // character cap to bound memory
        const p = s.length > maxLen ? s.slice(0, maxLen).trim() : s;
        phrases.push(p);
      }
      return phrases;
    }
    
    // idempotent cursor pagination: stop when a cursor repeats so the last page is not fetched twice
    async function* fetchPages(baseUrl: string, startCursor?: string): AsyncGenerator<Item[]> {
      const seenCursors = new Set<string | null>();
      let cursor: string | undefined = startCursor;
    
      while (true) {
        const url = new URL(baseUrl);
        if (cursor) url.searchParams.set("cursor", cursor);
    
        const res = await fetch(url, { headers: { "accept": "application/json" } });
        if (!res.ok) throw new Error(`HTTP ${res.status}`);
        const json = await res.json();
        const parsed = PageSchema.safeParse(json);
        if (!parsed.success) throw new Error("Page schema invalid");
    
        const page = parsed.data;
        const nextCursor = page.nextCursor ?? null;
    
        // idempotency: stop if this cursor was already seen
        if (seenCursors.has(nextCursor)) break;
        seenCursors.add(nextCursor);
    
        yield page.items;
    
        if (nextCursor === null) break;
        cursor = nextCursor || undefined;
      }
    }
    
    // processing pipeline: streaming, with phrase-level dedupe
    async function* phrasePipeline(baseUrl: string, startCursor?: string): AsyncGenerator<string> {
      const seenPhrase = new Set<string>();
      for await (const items of fetchPages(baseUrl, startCursor)) {
        for (const it of items) {
          const raw = it?.text ?? ""; // optional chaining + default
          if (!raw) continue;
    
          const lang = it?.metadata?.lang;
          const norm = normalizeText(raw, lang);
          if (!norm) continue;
    
          for (const p of extractPhrases(norm, lang)) {
            const key = p; // the normalized phrase is the dedupe key
            if (key && !seenPhrase.has(key)) {
              seenPhrase.add(key);
              yield key;
            }
          }
        }
      }
    }
    
    // usage example
    async function main() {
      const stop = samplePerf("pipeline");
      const baseUrl = "https://api.example.com/events";
      const out: string[] = [];
      for await (const phrase of phrasePipeline(baseUrl)) {
        out.push(phrase);
        // replace with file or database writes as needed
      }
      console.log(`phrases=${out.length}`);
      stop();
    }
    
    // entry point
    if (require.main === module) {
      main().catch(err => {
        console.error(err);
        process.exit(1);
      });
    }
    
    // simple performance sampling
    function samplePerf(label: string) {
      const start = performance.now();
      const m0 = process.memoryUsage().heapUsed;
      return () => {
        const ms = performance.now() - start;
        const m1 = process.memoryUsage().heapUsed;
        console.log(`${label}: ${ms.toFixed(1)}ms, heap=${(m1/1024/1024).toFixed(2)}MB, Δ=${((m1-m0)/1024/1024).toFixed(2)}MB`);
      };
    }
    
  • Expected result
    • No more TypeErrors; phrases are streamed and deduplicated across pages; zero-width/control/emoji cleaning is stable; the last page is not fetched twice; regex pressure drops and peak memory falls.
  4. Testing and Verification ✅
  • Action
    • Write Jest unit tests and property tests; add performance and memory snapshots; simulate pagination idempotency.
  • Applicable code snippet
    // __tests__/normalize.test.ts
    import { normalizeText } from "../index";
    
    test("removes zero-width, controls, emojis, tags", () => {
      const raw = "<b>Hello&nbsp;</b>\u200B\uFEFF😀 world\t";
      const out = normalizeText(raw, "en");
      expect(out).toBe("hello world");             // 英文小写、清洗完成
      expect(/[\u200B-\u200D\uFEFF]/.test(out)).toBe(false);
      expect(/\p{Extended_Pictographic}/u.test(out)).toBe(false);
      expect(/<[^>]*>/.test(out)).toBe(false);
    });
    
    // __tests__/phrases.test.ts
    import { extractPhrases } from "../index";
    
    test("splits sentences by multilingual punctuation", () => {
      const out = extractPhrases("点击按钮。Submit form! 完成支付?");
      expect(out.length).toBeGreaterThanOrEqual(3);
    });
    
    // __tests__/paging.test.ts
    import { PageSchema } from "../index";
    
    test("page schema validates items with optional fields", () => {
      const ok = PageSchema.safeParse({ items: [{ text: "x" }, { metadata: { lang: "zh" } }], nextCursor: null });
      expect(ok.success).toBe(true);
    });
    
    // property tests (fast-check)
    // npm i -D fast-check jest @types/jest ts-node
    import fc from "fast-check";
    import { normalizeText } from "../index";
    
    test("normalize is total and removes control chars", () => {
      fc.assert(
        fc.property(
          fc.string(), fc.string(),
          (raw, lang) => {
            const out = normalizeText(raw, lang);
            // total function: no throw, no control/zero-width chars
            expect(out).not.toBeUndefined();
            expect(/[\p{Cc}\p{Cf}]/u.test(out)).toBe(false);
            // length does not blow up (no explosive expansion)
            expect(out.length).toBeLessThanOrEqual(Math.max(raw.length, 1000));
          }
        ),
        { verbose: true }
      );
    });
    
    // performance and memory baseline (example)
    // __tests__/perf.test.ts
    import { normalizeText, extractPhrases } from "../index";
    
    test("perf baseline", () => {
      const samples = Array.from({ length: 5000 }, (_, i) => `<p>Event ${i} 😀 &nbsp; 点击按钮,提交表单。</p>`);
      const t0 = performance.now();
      const m0 = process.memoryUsage().heapUsed;
      let count = 0;
      for (const s of samples) {
        const n = normalizeText(s);
        count += extractPhrases(n).length;
      }
      const t1 = performance.now();
      const m1 = process.memoryUsage().heapUsed;
      const qps = samples.length / ((t1 - t0) / 1000);
      const memDeltaMB = (m1 - m0) / 1024 / 1024;
      console.log({ qps: Math.round(qps), memDeltaMB: memDeltaMB.toFixed(2), phrases: count });
      expect(qps).toBeGreaterThan(8000);      // target QPS (example threshold)
      expect(memDeltaMB).toBeLessThan(30);    // memory-growth cap (example threshold)
    });
    
    // pagination idempotency test (mocked fetch)
    // __tests__/idempotent.test.ts
    import { phrasePipeline } from "../index";
    
    const pages = [
      { items: [{ text: "A" }], nextCursor: "c2" },
      { items: [{ text: "B" }], nextCursor: null },
      { items: [{ text: "B" }], nextCursor: null }, // 重复
    ];
    let callIdx = 0;
    // @ts-ignore
    global.fetch = async () => ({ ok: true, json: async () => pages[Math.min(callIdx++, pages.length - 1)] });
    
    test("no duplicate last page", async () => {
      const phrases: string[] = [];
      for await (const p of phrasePipeline("https://fake")) phrases.push(p);
      expect(phrases).toEqual(["a", "b"]); // lowercased English normalization
    });
    
  • Expected result
    • All tests pass; normalization and cleaning are stable; QPS improves and peak memory is bounded; pagination is idempotent with no duplicates.
  5. Participatory Points and Iteration Metrics 🎯
  • Action
    • Define quantitative metrics to drive iteration.
  • Applicable code snippet
    // metric computation (example)
    type Metrics = {
      parseSuccessRate: number;  // parsed successfully / total
      qps: number;               // items processed per second
      peakMemMB: number;         // peak memory in MB
    };
    
    function score(m: Metrics): number {
      let pts = 0;
      if (m.parseSuccessRate >= 0.99) pts += 4; else if (m.parseSuccessRate >= 0.97) pts += 2;
      if (m.qps >= 8000) pts += 4; else if (m.qps >= 6000) pts += 2;
      if (m.peakMemMB <= 256) pts += 2; else if (m.peakMemMB <= 512) pts += 1;
      return pts; // max 10
    }
    
  • Expected result
    • Visible iteration gains: parse success rate up, QPS up, memory use down; usable for acceptance and the next optimization round.

Task complete! 10 points earned. Current level: Junior Debugger. Next level: Code Fixer (20 points away).


Problem Solved

Provides a powerful solution that helps users resolve data-related programming errors easily and efficiently. When users hit complex data-processing errors, this prompt guides them to pinpoint the cause precisely while supplying step-by-step debugging guidance and code examples, so they can fix the problem quickly and verify that the fix works. It is especially suited to developers, analysts, and engineers whose core work is data processing, saving debugging time and improving productivity.

Intended Users

Data scientists

Helps them quickly resolve programming errors encountered while cleaning, transforming, or analyzing data, improving efficiency and keeping the focus on key data insights.

Data engineers

Supports data-pipeline construction and optimization, quickly fixing data-transfer and storage errors to keep data complete and flowing smoothly.

AI developers

Diagnoses and resolves data problems in model development and training, reducing the time lost and performance degradation caused by errors.

Feature Summary

Quickly pinpoints data-related programming errors, with clear cause analysis and professional problem identification.
Generates step-by-step debugging guides in one shot, covering everything from error discovery to resolution.
Ships practical code examples that walk through each fix, with no need to keep consulting documentation.
Automatically optimizes the debugging flow, using context to offer the best repair suggestions.
Preserves data-processing integrity, keeping workflows efficient and smooth.
Supports multiple programming languages and data-processing scenarios for broad, flexible use.
Motivates usage with a task-completion reward mechanism that makes debugging more engaging.
Outputs structured debugging content that is easy to understand and execute for users of any technical background.
Verifies fixes in real time, ensuring results are accurate after repair.
Provides expert-level service from a debugger's perspective, seamlessly supporting data-analysis goals.

How to Use a Purchased Prompt Template

1. Use it directly in an external chat app

Copy the prompt generated from the template into your usual chat app (ChatGPT, Claude, etc.) and start a conversation; no extra development is needed. Suited to quick personal trials and lightweight use.

2. Publish it as an API endpoint

Turn the prompt template into an API: your program can modify the template parameters at will and call it through the interface, enabling automation and batch processing. Suited to developer integration and embedding in business systems.

3. Configure it in an MCP client

Configure the corresponding server address in your MCP client so your AI application can invoke the prompt template automatically. Suited to advanced users and team collaboration, letting prompts move seamlessly between AI tools.


What You Get After Purchase

The complete prompt template
- 397 tokens
- 4 adjustable parameters
{ Programming language } { Data-processing task } { Error description } { Data source type }
Usage rights to community-contributed content
- Curated community examples to help you get productive with the prompt quickly