Popular roles are not just a source of inspiration; they are also your efficiency assistant. With carefully curated role prompts, you can quickly generate high-quality content, spark creative ideas, and find the solution that best fits your needs. Creation made easier, value made more direct!
We keep the role library updated for different user needs, so you can always find the right starting point for inspiration.
This prompt is built for program designers, offering end-to-end support from algorithm conception to optimization. Through a systematic analysis framework, it helps designers construct high-performance algorithm solutions for complex business scenarios. Core strengths include: algorithm-selection guidance based on problem characteristics, multi-dimensional complexity trade-off analysis, structured pseudocode, and targeted optimization advice. It is particularly suited to big-data processing, real-time computation, and high-concurrency scenarios, ensuring algorithm designs meet performance requirements while remaining maintainable.
The goal is to complete real-time risk scoring and interception decisions on a high-concurrency payment stream peaking at about 250k QPS, within <20ms per request (p99 <30ms, p50 <10ms), <3ms CPU, and <1MB memory. The solution adopts a "layered early-exit + sliding-window feature aggregation + rules and gradient-boosted-tree hybrid" architecture, supporting multi-source event fusion (device fingerprint, geolocation, account profile, behavioral sequences), session-level feature maintenance, cold-start fallbacks with adaptive thresholds, online AUC monitoring, and hot model updates. Core ideas:
Applicable scenarios: real-time payment risk control; frequent out-of-order events and delayed labels; high-cardinality categorical and heavy-tailed numeric features; financial risk control requiring strong explainability and compliance.
Entry point and data-minimization principle
Multi-source event fusion and degradation on missing data
Sliding-window aggregation and session-level features
Rule-engine "early-exit layer" (high priority, auditable)
Feature-vector construction and consistency guarantees
Model layer (GBDT hybrid)
Adaptive thresholds and three-band decisions
Online monitoring and hot model updates
Disaster recovery and compliance
Note: the following is platform-agnostic, standardized pseudocode and does not target any specific programming language.
Function ProcessEvent(E):
    T0 = Now()
    K = ExtractKeys(E)    // account_id, device_id, ip, merchant_id, geo, session_id
    Meta = SanitizeAndMinimize(E)
    // L1 lookup with a high hit rate; degrade to L2 on miss
    Profile = L1_KV_Get(K)
    if Profile.miss:
        Profile = L2_KV_Get_WithTimeout(K)
        if Profile.timeout:
            Profile = UseLastSnapshotOrMinimalProfile()
    SessionCtx = SessionCache_Get(K.session_id)
    WinFeat = AggregateWindows(K, E.event_time)
    F = BuildFeatures(E, Profile, SessionCtx, WinFeat)
    F = ConsistentTransform(F, FeatureSchemaVersion)
    // Rule early-exit layer
    RuleResult = EvaluateRules(F, K)
    if RuleResult.decision in {ALLOW, REJECT}:
        LogDecision(E, F, RuleResult, explain=RuleResult.explain)
        UpdateStatesPostDecision(K, E, RuleResult)
        return RuleResult.decision, RuleResult.score, RuleResult.explain
    // Model scoring
    S = ScoreGBDT(F)                     // quantized shallow trees
    S = CalibrateScore(S)                // post-calibration
    Explain = ComputeContributions(F, S) // top-K features
    // Adaptive thresholds
    Segment = DeriveSegment(K, E)
    (θ_allow, θ_reject) = GetAdaptiveThresholds(Segment)
    Decision = BandingDecision(S, θ_allow, θ_reject)
    LogDecision(E, F, Decision, explain=Explain)
    UpdateStatesPostDecision(K, E, Decision)
    AsyncMonitoringUpdate(E, S)          // deferred until the label arrives
    EnsureLatencyBudget(T0)              // early returns / timeouts honored
    return Decision, S, Explain
Function AggregateWindows(K, event_time):
    // Ring buckets updated per window; late arrivals corrected within the lateness bound L
    for each dimension D in {account, device, ip, merchant, geo, session}:
        for each window W in {1m, 5m, 15m, 1h, 24h, 7d}:
            Bucket = RingBuffer[D][W].LocateBucket(event_time)
            Bucket.UpdateCountsAndSums(K, event_time)
    return ComposeWindowFeatures(K)
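The per-window ring-bucket update above can be sketched in Python. This is a minimal illustration only: the class name, the per-second bucket granularity, and the single 60-second window are assumptions; the real design keys one ring per (dimension, window) pair and applies late-arrival correction within a lateness bound.

```python
class RingWindow:
    """Fixed-size ring of per-second buckets approximating a sliding-window count."""

    def __init__(self, window_seconds=60):
        self.size = window_seconds
        self.counts = [0] * window_seconds
        self.stamps = [None] * window_seconds  # the second each bucket currently holds

    def _locate(self, event_ts):
        sec = int(event_ts)
        idx = sec % self.size
        if self.stamps[idx] != sec:   # bucket is stale: reclaim it for the new second
            self.stamps[idx] = sec
            self.counts[idx] = 0
        return idx

    def update(self, event_ts, n=1):
        self.counts[self._locate(event_ts)] += n

    def total(self, now_ts):
        # Sum only buckets whose second still falls inside the window.
        lo = int(now_ts) - self.size + 1
        return sum(c for s, c in zip(self.stamps, self.counts)
                   if s is not None and s >= lo)

w = RingWindow(window_seconds=60)
w.update(1000.0)
w.update(1000.5)
w.update(1030.0)
assert w.total(1030.0) == 3   # all three events within the last 60s
assert w.total(1120.0) == 0   # the window has fully rolled past them
```

Because stale buckets are reclaimed lazily on write, no background sweeper is needed; the trade-off is that `total` must filter by stamp rather than summing blindly.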
Function EvaluateRules(F, K):
    // Priority-ordered; auditable
    if BlacklistFilter_Hit(K or F):
        return Decision(REJECT, score=1.0, explain={"rule":"blacklist","keys":K})
    if WhitelistFilter_Hit(K or F):
        return Decision(ALLOW, score=0.0, explain={"rule":"whitelist","keys":K})
    if VelocityExceeded(F) or GeoJumpAnomaly(F) or SessionFailureSpike(F):
        return Decision(REJECT, score=0.95, explain={"rule":"velocity/geo/session","features":Subset(F)})
    return Decision(UNDECIDED)
Function ScoreGBDT(F):
    // Quantized threshold matching; shallow trees for efficiency
    s = 0
    for tree in Model.Trees:
        node = tree.root
        while node is not leaf:
            v = F[node.feature]
            node = (v <= node.threshold) ? node.left : node.right
        s += node.leaf_value
    return Normalize(s)
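A minimal Python sketch of the ScoreGBDT traversal above. The tuple-based tree layout, the toy feature names, and the sigmoid normalization are illustrative assumptions, not the production quantized format.

```python
import math

# A node is either ("leaf", value) or ("split", feature, threshold, left, right).
def predict_tree(node, features):
    while node[0] == "split":
        _, feat, thr, left, right = node
        node = left if features[feat] <= thr else right
    return node[1]

def score_gbdt(trees, features):
    # Sum leaf margins across all trees, then squash to (0, 1).
    s = sum(predict_tree(t, features) for t in trees)
    return 1.0 / (1.0 + math.exp(-s))

# Two toy stumps on hypothetical features "amount" and "velocity_1m".
trees = [
    ("split", "amount", 100.0,
        ("leaf", -1.0), ("leaf", 2.0)),
    ("split", "velocity_1m", 5.0,
        ("leaf", -0.5), ("leaf", 1.5)),
]
low_risk  = score_gbdt(trees, {"amount": 50.0,  "velocity_1m": 1.0})
high_risk = score_gbdt(trees, {"amount": 500.0, "velocity_1m": 9.0})
assert low_risk < 0.5 < high_risk
```

Shallow trees keep the inner `while` loop to a handful of iterations per tree, which is what makes the <3ms CPU budget plausible at this layer.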
Function GetAdaptiveThresholds(Segment):
    // t-digest / bucketed quantiles; cost-sensitive adjustment
    base = SegmentScoreDistribution[Segment]
    θ_allow  = base.quantile(q_allow)  - delta_allow_cost
    θ_reject = base.quantile(q_reject) + delta_reject_cost
    return (Clamp(θ_allow), Clamp(θ_reject))
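The quantile-plus-cost-offset logic of GetAdaptiveThresholds can be illustrated with exact quantiles over a score sample. The production design would use t-digest or bucketed quantiles as the text notes; all parameter values below are placeholders, not calibrated settings.

```python
def quantile(sorted_scores, q):
    # Exact empirical quantile over a sorted sample (stand-in for t-digest).
    idx = min(len(sorted_scores) - 1, int(q * len(sorted_scores)))
    return sorted_scores[idx]

def adaptive_thresholds(scores, q_allow=0.50, q_reject=0.99,
                        delta_allow=0.02, delta_reject=0.01):
    s = sorted(scores)
    theta_allow  = quantile(s, q_allow)  - delta_allow   # widen the allow band
    theta_reject = quantile(s, q_reject) + delta_reject  # tighten rejects
    clamp = lambda x: max(0.0, min(1.0, x))
    return clamp(theta_allow), clamp(theta_reject)

scores = [i / 1000.0 for i in range(1000)]   # uniform toy score distribution
lo, hi = adaptive_thresholds(scores)
assert lo < hi
assert 0.0 <= lo and hi <= 1.0
```

Scores below `lo` would be allowed, above `hi` rejected, and the band in between routed to the model or manual review, matching the three-band decision described earlier.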
Function AsyncMonitoringUpdate(E, S):
    // Bucketed AUC and drift monitoring (labels arrive late)
    bucket = ScoreBucketize(S)
    MonitoringCounters.Update(bucket, pending_label_ref=E.id)
Function UpdateStatesPostDecision(K, E, Decision):
    // Sessions, windows, lists, and the audit trail
    SessionCache_Update(K.session_id, E, Decision)
    RingBuffers_AdvanceIfNeeded(E.event_time)
    if Decision == REJECT and ConfirmedFraudLater(E):
        BlacklistDelta.Apply(K or fingerprint)
Time complexity
Space complexity
Practical performance considerations
Layered early exit and budget control
Feature engineering and consistency
Data structures and memory
Model-side performance and explainability
Adaptive thresholds and robustness
Handling out-of-order events and late labels
Canary rollout and hot updates
Disaster recovery and compliance
Suggested parameters (as an initial calibration)
Under strict latency and resource constraints, this solution balances explainability and effectiveness: layered early exit, sliding-window aggregation, and a lightweight model deliver efficient real-time fraud detection and interception, while meeting business requirements for canary hot updates, online monitoring, and cross-region disaster recovery.
An ultra-low-latency CTR estimation and dynamic bidding solution for real-time ad auctions, built on an overall architecture of "in-request micro-batching + multi-path model routing + sparse vectorized inference + strongly consistent frequency capping/dedup + risk-constrained bidding". On CPU-only hardware, SIMD instructions and memory-layout optimizations keep the end-to-end flow (feature extraction, model inference, bid decision, audit logging) within p99 <8ms and p50 <3ms.
Core ideas:
Applicable scenarios: high QPS (peak ≈3 million); many candidates per request (50~200); sparse features (≈120 dimensions); high user-context hit rate (>98%); pronounced concept drift; delayed labels (1~6 hours); no GPU and limited memory (<64GB); extreme performance as the top priority.
The overall flow has five stages; every critical path is constant or linear time and highly vectorized:
Rolling model updates: RCU (Read-Copy-Update) double-buffered weight loading with an atomic pointer switch, zero downtime; the online calibrator updates per bucket, aligned to the label-delay window (1~6h).
The following is structured, platform-agnostic pseudocode with no language-specific details:
FUNCTION ProcessRequest(request):
    start_ts = Now()
    deadline = start_ts + 10ms
    // 1) Context fetch
    user_ctx = L1Cache.Get(request.user_id)
    IF user_ctx == NULL THEN
        user_ctx = L2Cache.TryGetWithin(request.user_id, 1ms)
        IF user_ctx == NULL THEN
            user_ctx = DefaultContext()
            SetFlag(request, "CTX_MISS")
        ENDIF
    ENDIF
    // 2) In-request dedup and frequency-cap pre-filtering
    seen = BitsetOrSmallHash()
    filtered = []
    FOR cand IN request.candidates:
        fp = Fingerprint(cand.creative_id, cand.ad_id, cand.campaign_id)
        IF seen.Contains(fp) THEN
            LogReason(cand, "DEDUP_DROP")
            CONTINUE
        ENDIF
        seen.Insert(fp)
        freq_key = (request.user_id, cand.campaign_id)
        freq_count = AtomicCounters.Read(freq_key)
        IF freq_count >= GetFreqCap(cand.campaign_id) THEN
            LogReason(cand, "FREQ_DROP")
            CONTINUE
        ENDIF
        budget_key = cand.campaign_id
        budget_state = AtomicBudget.Read(budget_key)
        IF budget_state.RemainingTokens <= 0 THEN
            LogReason(cand, "BUDGET_DROP")
            CONTINUE
        ENDIF
        filtered.Append(cand)
    ENDFOR
    IF filtered.IsEmpty() THEN
        RETURN NoBid("ALL_FILTERED")
    ENDIF
    // 3) Feature assembly and approximation
    batch = InitBatchBuffer(size=filtered.Size())
    FOR i FROM 0 TO filtered.Size()-1:
        cand = filtered[i]
        user_feats = AssembleUserFeatures(user_ctx)
        ad_feats = AssembleAdFeatures(cand)
        ctx_feats = AssembleContextFeatures(request)
        merged = SparseMerge(user_feats, ad_feats, ctx_feats)  // CSR indices + values
        approx = ApproximateExpensive(merged)                  // quantized embeddings, cached stats
        normed = NormalizeAndClip(approx)
        gate_vec = GateVector(normed)                          // missing ratio, freshness, cold flag
        batch.Load(i, indices=normed.indices, values=normed.values, gate=gate_vec, meta=cand.meta)
    ENDFOR
    // 4) Multi-path model routing and CTR inference (batched, vectorized)
    ctr_list = Array(size=filtered.Size())
    route_list = Array(size=filtered.Size())
    GROUPS = BatchPartition(batch, SIMDWidthAligned())         // e.g., 16/32
    FOR group IN GROUPS:
        gates = group.gates
        mask_fast = GatesSelect(gates, "FAST")
        mask_tree = GatesSelect(gates, "TREE")
        mask_cold = GatesSelect(gates, "COLD")
        // FAST: GLM + FM (quantized weights + LUT sigmoid)
        IF mask_fast.NonEmpty():
            idxs, vals = group.GetSparse(mask_fast)
            z_glm = SparseDotInt8Quantized(GLMWeights, idxs, vals)    // SIMD FMA
            z_fm = SparseFMInt8Quantized(FMWeights, idxs, vals, k=8)  // SIMD reduction
            z = z_glm + z_fm + Bias
            p = SigmoidApproxLUT(z)
            // Calibration
            p_cal = Calibrator.Apply(p, SegmentKey(user_ctx, group.meta))
            ctr_list.Update(mask_fast, p_cal)
            route_list.Update(mask_fast, "FAST")
        // TREE: oblivious trees (bitwise, branch-free)
        IF mask_tree.NonEmpty():
            p_tree = ObliviousTreePredict(group, TreeBundle)   // bitmask eval
            p_cal = Calibrator.Apply(p_tree, SegmentKey(user_ctx, group.meta))
            ctr_list.Update(mask_tree, p_cal)
            route_list.Update(mask_tree, "TREE")
        // COLD: heuristic fallback
        IF mask_cold.NonEmpty():
            p_cold = HeuristicCTR(user_ctx, group.meta)        // population × creative × slot
            p_cal = Calibrator.Apply(p_cold, ColdSegmentKey())
            ctr_list.Update(mask_cold, p_cal)
            route_list.Update(mask_cold, "COLD")
    ENDFOR
    // 5) Bid decision, strongly consistent reservation, audit trail
    bids = Array(size=filtered.Size())
    scores = Array(size=filtered.Size())
    FOR i FROM 0 TO filtered.Size()-1:
        cand = filtered[i]
        p = ctr_list[i]
        // Lower confidence bound (Wilson or Beta posterior)
        lcb = CTR_LCB(p, Uncertainty(cand, route_list[i], user_ctx))
        pacing = BudgetPacingFactor(AtomicBudget.Read(cand.campaign_id))  // 0..1
        risk = RiskPenalty(cand, seen, freq_margin=RemainingFreqMargin(request.user_id, cand.campaign_id))
        Vclick = ValuePerClick(cand)                           // eCPC/eCPA target mapping
        bid = Clamp(BidMin(cand), BidMax(cand), Alpha(cand) * Vclick * lcb * pacing * risk)
        bids[i] = bid
        scores[i] = ExpectedROI(lcb, Vclick, bid)
    ENDFOR
    // Select Top-1 (or Top-N); attempt atomic reservation of frequency cap and budget
    order = ArgSortDescending(scores)
    selected = NULL
    FOR idx IN order:
        cand = filtered[idx]
        // Strong consistency: atomic check + reserve (on failure, try the next candidate)
        ok_freq = AtomicCounters.TryReserve((request.user_id, cand.campaign_id), 1)  // CAS
        IF NOT ok_freq THEN
            LogReason(cand, "FREQ_RESERVE_FAIL")
            CONTINUE
        ENDIF
        ok_budget = AtomicBudget.TryConsume(cand.campaign_id, tokens=BudgetTokens(bids[idx]))
        IF NOT ok_budget THEN
            AtomicCounters.Rollback((request.user_id, cand.campaign_id), 1)
            LogReason(cand, "BUDGET_RESERVE_FAIL")
            CONTINUE
        ENDIF
        selected = (cand, bids[idx], ctr_list[idx], route_list[idx])
        BREAK
    ENDFOR
    IF selected == NULL THEN
        RETURN NoBid("RESERVE_FAIL")
    ENDIF
    // Audit trail
    AuditLog.Emit({
        ts: start_ts,
        req_id: request.id,
        user_id: request.user_id,
        cand_id: selected.cand.id,
        route: selected.route,
        model_version: ModelVersion(route=selected.route),
        features_hash: FeaturesHash(batch, selected.cand),
        features_snapshot_ref: SnapshotRef(batch, selected.cand),
        ctr_raw: ctr_list[SelectedIndex()],
        ctr_lcb: CTR_LCB(ctr_list[SelectedIndex()], Uncertainty(...)),
        bid: selected.bid,
        freq_state: AtomicCounters.Read((request.user_id, selected.cand.campaign_id)),
        budget_state: AtomicBudget.Read(selected.cand.campaign_id),
        reasons: CollectReasons(request, selected, filtered),
    })
    // Time-budget guard
    IF Now() > deadline THEN
        SetFlag(request, "TIME_NEAR_DEADLINE")
    ENDIF
    RETURN BidResponse(selected.cand, selected.bid)
END FUNCTION
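The FAST path above relies on SigmoidApproxLUT to replace `exp` on the hot path. A minimal sketch of that idea in Python: precompute the sigmoid on a uniform grid, then answer queries with clamping plus linear interpolation. The table range [-8, 8] and 1024 entries are illustrative choices, not tuned values.

```python
import math

class SigmoidLUT:
    def __init__(self, lo=-8.0, hi=8.0, n=1024):
        self.lo, self.hi = lo, hi
        self.step = (hi - lo) / (n - 1)
        # Precompute sigmoid on a uniform grid once, at load time.
        self.table = [1.0 / (1.0 + math.exp(-(lo + i * self.step)))
                      for i in range(n)]

    def __call__(self, z):
        if z <= self.lo:
            return self.table[0]
        if z >= self.hi:
            return self.table[-1]
        x = (z - self.lo) / self.step
        i = int(x)
        frac = x - i
        # Linear interpolation between the two neighboring grid points.
        return self.table[i] * (1 - frac) + self.table[i + 1] * frac

lut = SigmoidLUT()
for z in (-3.0, 0.0, 0.7, 4.2):
    assert abs(lut(z) - 1.0 / (1.0 + math.exp(-z))) < 1e-3
```

With ~1k entries the interpolation error is far below what CTR calibration can resolve, and the lookup is branch-light and trivially vectorizable in a SIMD implementation.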
Key points of auxiliary functions and data structures (abridged):
Instruction-level and system-level optimization strategies (grouped by hotspot):
With the design and optimizations above, the solution meets the end-to-end latency targets of p50 <3ms and p99 <8ms on CPU-only, SIMD-vectorized hardware, while guaranteeing strongly consistent frequency capping and budgets, explainable and replayable results, zero-downtime rolling model updates, and a markedly more robust revenue/cost ratio.
For edge deployments with constrained memory and network, and high-frequency logs with repetitive patterns, this solution adopts a streaming framework of "structured template mining + lightweight online clustering + hybrid rule/statistical/sequence detection + incremental learning with compressed reporting". Core ideas:
The solution targets industrial-grade edge constraints: <50MB memory, <20% CPU, jittery or offline networks, scarce labels, and anomaly rates below 0.1%.
The pipeline is interruptible and recoverable, modularized as follows:
Note: platform-agnostic generic pseudocode; concrete data-type details are omitted.
Pipeline.OnEvent(event):
    if BloomSeen(event.hash_short, window=30s):
        return
    e = Preprocess(event)                       // masking, anonymization, fingerprinting
    t_id, match_quality = TemplateMatch(e.tokens, e.token_types)
    f = ExtractFeatures(e, t_id)                // length, char ratios, slot fingerprints, inter-arrival, etc.
    UpdateStatistics(t_id, f.timestamp)
    UpdateMicroClusters(t_id, f)
    UpdateSequenceModel(prev_template_id, t_id)
    score = AnomalyScore(t_id, f, match_quality)
    if score >= Threshold(t_id):
        act = ActionSynthesis(t_id, f, score)
        ReportBuffer.Add(ComposeAnomalyRecord(t_id, f, score, act))
        ExecuteSafeguardedAction(act)
    else:
        AggregateBuffer.Update(t_id, f)         // for minute-level summaries
    prev_template_id = t_id
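The BloomSeen(hash, window=30s) dedup step can be approximated without a full Bloom filter. The sketch below uses two rotating generations of truncated hashes to mean "seen within roughly the last 30 seconds"; a production edge agent would use a compact Bloom filter for the same role. Class and parameter names are illustrative.

```python
class WindowedSeen:
    def __init__(self, window_s=30):
        self.window_s = window_s
        self.cur, self.prev = set(), set()
        self.epoch = None

    def seen(self, short_hash, now_s):
        epoch = int(now_s) // self.window_s
        if epoch != self.epoch:            # rotate generations on a window boundary
            self.prev = self.cur if self.epoch == epoch - 1 else set()
            self.cur, self.epoch = set(), epoch
        hit = short_hash in self.cur or short_hash in self.prev
        self.cur.add(short_hash)
        return hit

d = WindowedSeen(window_s=30)
assert d.seen(0xBEEF, now_s=0) is False    # first sighting
assert d.seen(0xBEEF, now_s=10) is True    # duplicate within the window
assert d.seen(0xBEEF, now_s=70) is False   # window has expired
```

Keeping two generations bounds memory to roughly two windows of distinct hashes while avoiding the hard edge of a single reset.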
Preprocess(event):
    tokens = Tokenize(event.text)
    tokens = MaskPII(tokens)                    // rule-based masking
    token_types = TypeAnnotate(tokens)          // NUM/HEX/STR/TS, etc.
    short_hash = HashTruncate(event.text)
    return { tokens, token_types, short_hash, timestamp = event.ts }
TemplateMatch(tokens, types):
    sig = BuildSignature(tokens, types)         // stable positions + type sequence
    bucket = LSHBuckets.Lookup(sig)
    best_t, d_min = None, +inf
    for t in bucket:
        d = TemplateDistance(tokens, types, t)  // edit distance ignoring variable positions
        if d < d_min:
            best_t, d_min = t, d
    if d_min <= tau:
        Templates[best_t].Touch()
        return best_t, QualityFrom(d_min)
    else:
        if Templates.Size < T_max or SpaceSaving.Admit(sig):
            new_t = CreateTemplate(tokens, types)
            return new_t.id, QualityFromNew()
        else:
            rare_t = Templates.RareClassId
            return rare_t, QualityFromRare()
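The BuildSignature idea, masking variable slots so structurally identical log lines map to one template, can be sketched as below. The type rules (numbers, hex, ISO dates become variable slots) are simplified guesses standing in for the fuller NUM/HEX/STR/TS annotation, and the exact-signature dictionary stands in for the LSH-bucketed fuzzy match.

```python
import re

# Tokens matching these patterns are treated as variable slots.
VARIABLE = re.compile(r"^(0x[0-9a-fA-F]+|\d+(\.\d+)?|\d{4}-\d{2}-\d{2}.*)$")

def signature(message):
    tokens = message.split()
    return tuple("<*>" if VARIABLE.match(t) else t for t in tokens)

templates = {}

def template_match(message):
    # Exact-signature lookup; a new signature creates a new template id.
    sig = signature(message)
    return templates.setdefault(sig, len(templates))

a = template_match("connection from 10 failed after 300 ms")
b = template_match("connection from 42 failed after 17 ms")
c = template_match("disk quota exceeded on volume 3")
assert a == b    # same template, different variable values
assert a != c    # structurally different message
```

The real matcher additionally tolerates small token-level differences (edit distance up to tau) and caps template count at T_max with SpaceSaving admission, neither of which this exact-match sketch attempts.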
ExtractFeatures(e, t_id):
    inter_arrival = Now() - Templates[t_id].LastSeenTs
    var_fingerprints = HashSlots(e.tokens at variable positions)
    char_ratio = ComputeCharRatios(e.tokens)
    length = MessageLength(e.tokens)
    return { inter_arrival, var_fingerprints, char_ratio, length, timestamp = e.timestamp }
UpdateStatistics(t_id, ts):
    // Frequency statistics
    CMS.TemplateCount.Add(t_id, 1)
    SS_TopK.Update(t_id, +1)
    // Temporal pattern (minute granularity)
    MinuteBin = TimeToBin(ts)
    EWMA[t_id][MinuteBin].Update(+1)
UpdateMicroClusters(t_id, f):
    clusters = MicroClusters[t_id]
    c_best, dist = ArgMinCluster(clusters, f)
    if dist <= eps:
        c_best.UpdateIncremental(f)             // Welford update
    else if clusters.Size < K_max:
        clusters.Add(NewClusterFrom(f))
    else:
        clusters.ArgMinWeight().Absorb(f)       // replace/absorb into the lowest-weight cluster
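The Welford-style update named in c_best.UpdateIncremental can be shown concretely: each micro-cluster keeps a running count, mean, and M2 (sum of squared deviations) per feature, all updated in O(1) with no sample buffer. A single scalar feature is used here for illustration.

```python
class MicroCluster:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        # Welford's online algorithm: numerically stable mean/variance.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        return self.m2 / self.n if self.n else 0.0

c = MicroCluster()
for x in (2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0):
    c.update(x)
assert abs(c.mean - 5.0) < 1e-9
assert abs(c.variance() - 4.0) < 1e-9   # population variance of this sample
```

Content-deviation scoring can then use the z-score of a new feature value against the cluster's mean and variance, with no stored history.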
UpdateSequenceModel(prev_t, t_id):
    if prev_t is not None:
        CMS.PairCount.Add((prev_t, t_id), 1)
        SS_TopPairs.Update((prev_t, t_id), +1)
AnomalyScore(t_id, f, q):
    s_rule = RuleEngineScore(t_id, f)
    s_rate = RateChangeScore(EWMA[t_id], current_bin)
    s_cont = ContentDeviationScore(MicroClusters[t_id], f)
    s_seq  = SequenceRarityScore(prev_template_id, t_id, CMS, SS_TopPairs)
    // Confidence weighting: the lower the template support, the higher the rule and sequence weights
    support = Templates[t_id].Support()
    w = WeightsFromSupport(support)
    return w.rule*s_rule + w.rate*s_rate + w.cont*s_cont + w.seq*s_seq + BonusFrom(q)
Threshold(t_id):
    base = Quantile(ScoreHistory[t_id], p=target_p)   // e.g., the 99.7th percentile
    if DriftDetector[t_id].IsStable():
        return base
    else:
        return Relax(base)                            // loosen during drift to reduce false positives
DriftDetector (per template or group):
    input:  EWMA level, variance, rate score (s_rate)
    method: Page-Hinkley with parameters delta, lambda
    on detection: increase the forgetting factor lambda; reset the quantile cache; mark the window unstable
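A minimal Page-Hinkley detector, the method named in the DriftDetector sketch above: it accumulates deviations from the running mean and flags drift when the cumulative sum rises more than `lam` above its historical minimum. The delta/lambda values below are illustrative, not calibrated.

```python
class PageHinkley:
    def __init__(self, delta=0.05, lam=5.0):
        self.delta, self.lam = delta, lam
        self.n, self.mean = 0, 0.0
        self.cum, self.cum_min = 0.0, 0.0

    def update(self, x):
        """Return True when an upward mean shift is detected."""
        self.n += 1
        self.mean += (x - self.mean) / self.n
        self.cum += x - self.mean - self.delta   # deviation beyond tolerance
        self.cum_min = min(self.cum_min, self.cum)
        return self.cum - self.cum_min > self.lam

ph = PageHinkley(delta=0.05, lam=5.0)
drift_at = None
stream = [1.0] * 50 + [3.0] * 50     # mean jumps from 1.0 to 3.0 at index 50
for i, x in enumerate(stream):
    if ph.update(x) and drift_at is None:
        drift_at = i
assert drift_at is not None and drift_at >= 50   # fires only after the shift
```

`delta` controls tolerance to small fluctuations and `lam` the detection delay/false-alarm trade-off, which is why the text resets quantile caches and relaxes thresholds once the detector fires.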
ReportFlushTimer (per minute):
    // Produce a summary chunk
    summary = AggregateBuffer.EmitAndReset()
    chunk = MakeChunk(summary, AnomalyBatchBuffer.PopUpTo(L))
    chunk.id = NextSeqId()
    chunk.hash = Hash(chunk.payload)
    WAL.Append(chunk)
    TryUpload(chunk)
Uploader.TryUpload(chunk):
    if NetworkAvailable():
        Send(chunk)
        if AckReceived(chunk.id):
            WAL.CommitUntil(chunk.id)
            UpdateMerkle(chunk.id, chunk.hash)
        else:
            ScheduleRetry(chunk)
    else:
        ScheduleRetry(chunk)
CheckpointTimer (periodic):
    snapshot = Serialize(Templates, MicroClusters, EWMA, CMS, SS, Thresholds,
                         DriftDetectors, UploaderState, Version)
    Persist(snapshot)
RecoveryOnRestart():
    state = LoadLatestSnapshot()
    WAL.ReplayFrom(state.last_committed_seq)
HotUpgrade(new_version):
    DualRunWindowStart()
    RunBothPipelines()
    if MetricsAligned() and ResourceOK():
        StateMigrate(old_state -> new_state)
        SwitchOver()
    else:
        Rollback()
Time complexity:
Best case: templates already stable and the first candidate bucket hits: O(L).
Average case: O(L + B·L) + O(K_max) + O(D), i.e., roughly constant-time operations on the microsecond scale.
Worst case: tight template thresholds with frequent template creation, approximately O(L + B·L) plus small constants; when T_max is reached, SpaceSaving admission and template merging kick in, with occasional amortized O(1) cost.
Space complexity (additional on-device memory):
Typical total memory footprint: 15~25MB; under the capped configuration <40MB, meeting the <50MB constraint.
Practical performance considerations:
Expected detection quality:
The solution centers on being streaming, explainable, and lightweight, balancing edge resources, network volatility, and engineering maintainability while preserving the effectiveness and actionability of anomaly detection.
Helps program designers turn vague requirements into deployable high-performance algorithm solutions in demanding scenarios such as data-intensive, real-time, and high-concurrency systems. With a clear role and a closed-loop task, it guides the AI to deliver a clear algorithmic approach, structured pseudocode, complexity assessments, and an actionable optimization checklist, significantly shortening exploration time, reducing performance risk, and improving delivery quality and maintainability; users see tangible results during the trial period and are then more willing to upgrade to paid use.
For high-concurrency, real-time services: use the prompt to select algorithms for caching strategies, index design, and queue scheduling; automatically generate pseudocode and complexity assessments, identify bottlenecks early, and output an optimization checklist.
For logs, behavioral data, and stream processing: quickly design deduplication, aggregation, and window-computation algorithms; get a solution blueprint and resource estimates sized to the data, lowering compute cost and latency.
Choose suitable data structures for approximate retrieval, recall ranking, and feature processing; obtain generic pseudocode, performance trade-off notes, and iteration suggestions, shortening the path from prototype to production.
Copy the prompt generated from the template into your usual chat app (such as ChatGPT or Claude) and use it in conversation directly, with no extra development. Suited to quick personal trials and lightweight use.
Turn the prompt template into an API: your program can modify template parameters at will and call it directly through the interface, enabling automation and batch processing. Suited to developer integration and embedding into business systems.
Configure the corresponding server address in an MCP client so your AI application calls the prompt template automatically. Suited to advanced users and team collaboration, letting prompts move seamlessly across AI tools.