研讨课小组活动指导:AI伦理案例辩论
Seminar Small-Group Activity Guide: AI Ethics Case Debates
一、学习目标(向学生清晰说明)
Learning objectives (state to students)
- 运用至少两种伦理框架(如结果论、义务论/权利、公平正义、关怀伦理)分析AI案例。
- 构建与反驳论点,区分事实、价值与政策主张。
- 使用可靠证据支持主张,进行规范且尊重的学术辩论。
- 识别多方利益相关者,提出可行的改进与政策建议。
- 进行反思,认识技术选择的权衡与不确定性。
- Apply at least two ethical frameworks (e.g., consequentialism, deontology/rights, justice/fairness, care ethics) to AI cases.
- Build and rebut arguments; distinguish facts, values, and policy claims.
- Support claims with credible evidence in a respectful academic debate.
- Identify stakeholders and craft feasible improvements/policy recommendations.
- Reflect on trade-offs and uncertainty in socio-technical systems.
二、课前准备(教师)
Pre-class preparation (instructor)
- 选案与材料 / Case selection and materials
- 选3–5个适合本班背景的AI伦理案例(见“示例案例”)。
- 为每案准备1页事实包(中立、可核查)、关键术语表、利益相关者图、数据来源清单。
- Provide 3–5 suitable AI ethics cases (see “Sample cases”).
- Prepare a 1-page neutral fact sheet per case, glossary, stakeholder map, and source list.
- 阅读与分工 / Readings and role assignment
- 布置预读:基础伦理框架简介、相关政策文件节选(例如:OECD AI原则、UNESCO 2021建议、NIST AI RMF、EU AI Act概览)。
- 预分小组(4–6人)与角色(见下文),并发放评分规约与辩论流程。
- Assign pre-readings: ethical frameworks and policy excerpts (e.g., OECD AI Principles, UNESCO 2021, NIST AI RMF 1.0, EU AI Act overview).
- Pre-assign groups (4–6) and roles; share rubric and debate flow.
- 场地与支持 / Room setup and support
- 准备计时器、白板/便签、投影(可选)。
- 确保无障碍与差异化支持(字幕、可打印材料、延时等)。
- Set up timer, board/sticky notes, projector (optional).
- Ensure accessibility and accommodations.
三、课堂流程建议(75分钟示例)
Suggested class timeline (75 minutes)
- 破冰与规则(5分钟)/ Icebreaker and ground rules (5 min)
- 明确尊重沟通、基于证据、可纠错的氛围与发言规则。
- Clarify respectful discourse, evidence-based claims, and error-correction norms.
- 案例分配与事实澄清(10分钟)/ Case distribution and fact clarification (10 min)
- 每组领取同一或不同案例包;教师快速答疑事实,不给结论。
- Distribute case packets; instructor clarifies facts without taking positions.
- 小组准备(12分钟)/ Group preparation (12 min)
- 组内分配角色,梳理立场、证据与伦理框架;完成立论提纲。
- Assign roles; outline positions, evidence, and ethical frameworks.
- 辩论环节(20分钟/组)/ Debate round (20 min per group)
- 正方立论3’ → 反方立论3’ → 正方反驳3’ → 反方反驳3’ → 交叉质询4’ → 观众提问4’。
- Affirmative 3’ → Negative 3’ → Aff rebuttal 3’ → Neg rebuttal 3’ → Cross-exam 4’ → Audience Q&A 4’.
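For instructors who want automated pacing, the segment flow above can be sketched as a minimal timer script. The segment names and the `run_timer` helper are illustrative conveniences, not required tooling:

```python
import time

# Segments mirroring the suggested per-group flow (name, minutes).
SEGMENTS = [
    ("Affirmative constructive", 3),
    ("Negative constructive", 3),
    ("Affirmative rebuttal", 3),
    ("Negative rebuttal", 3),
    ("Cross-examination", 4),
    ("Audience Q&A", 4),
]

def run_timer(segments, tick=60):
    """Announce each segment, then count down minute by minute."""
    for name, minutes in segments:
        print(f"== {name} ({minutes} min) ==")
        for remaining in range(minutes, 0, -1):
            print(f"   {remaining} min remaining")
            time.sleep(tick)  # tick=60 for real minutes; use a smaller value to dry-run
    print("Segment flow complete.")
```

Running `run_timer(SEGMENTS)` paces one full round; the minutes sum to the 20-minute round described above.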
- 评议与裁决(10分钟)/ Deliberation and verdicts (10 min)
- 旁听组/评委按评分规约给出理由化裁决与改进建议。
- Audience/judges deliver reasoned verdicts with improvement suggestions.
- 全班复盘(10分钟)/ Whole-class debrief (10 min)
- 提炼共识、分歧、证据空白与政策选项;连接伦理框架与现实决策。
- Synthesize agreements, disagreements, evidence gaps, and policy options; tie frameworks to decisions.
- 快速反思(3–5分钟)/ Quick reflection (3–5 min)
- 学生写下1个观点更新与1条后续问题。
- Students note one changed view and one follow-up question.
四、小组与角色
Groups and roles
- 正方/反方主辩各1人:负责立论与总结。
- 证据与事实核查1人:实时核对数据与引用。
- 质询与回应1人:主导交叉质询与即席回应。
- 计时与规范1人:把控发言时长、提醒规则。
- 记录员1人(可兼任):整理要点与建议。
- Affirmative/Negative lead: openings and closings.
- Evidence checker: verifies data and citations live.
- Cross-exam lead: questions and responses.
- Timekeeper/proceduralist: manages timing and rules.
- Rapporteur: records key points and recommendations.
五、辩论规则与证据标准
Debate rules and evidence standards
- 命题形式 / Resolution format
- 使用清晰的政策命题,例如:“本院支持/反对 在公立学校部署人脸识别门禁。”
- Use clear policy resolutions, e.g., “This house supports/opposes deploying facial recognition for access control in public schools.”
- 论证要求 / Argumentation standards
- 每个主张须配证据与伦理理由;标明事实与价值判断的边界。
- Each claim must include evidence and ethical reasoning; separate facts from value judgments.
- 证据来源 / Evidence sources
- 优先:同行评议研究、权威评估(如NIST、OECD、WHO)、官方报告、具有方法描述的新闻调查。
- 避免仅凭轶事或无来源数据;口头引用时注明具体来源标题与年份。
- Prioritize: peer-reviewed studies, reputable assessments (NIST, OECD, WHO), official reports, investigative journalism with methods.
- Avoid anecdote-only claims; provide source titles and years orally.
- 交叉质询 / Cross-examination
- 只问问题,不做演讲;可要求对方澄清定义、范围、假设与权衡。
- Ask questions, not speeches; probe definitions, scope, assumptions, trade-offs.
- 新证据规则 / New-argument rule
- 总结陈词不引入全新论点;可强化已提出的证据或框架。
- No new arguments in closing; may reinforce existing points.
- 纠错协议 / Error-correction protocol
- 若出现可疑事实,立即“暂停核查”:要求来源;若无法提供,暂不计入裁决。
- If a claim is dubious, pause to request a source; if none, exclude from verdict.
六、伦理框架速览(供学生调用)
Ethical frameworks quick reference
- 结果论(功利):最大化整体福祉,衡量风险/收益与分布。
- 义务论/权利:尊重规则、尊严、隐私、同意与不可手段化。
- 公平与正义:程序正义、结果平等、机会平等、差别影响。
- 关怀伦理:情境、关系与弱势群体的具体需要。
- 美德伦理:组织与开发者的品格、谨慎与责任。
- Consequentialism: maximize welfare; assess risk/benefit and distribution.
- Deontology/Rights: respect rules, dignity, privacy, consent; never treat persons merely as means.
- Justice/Fairness: procedural justice, parity of outcomes/opportunity, disparate impact.
- Care ethics: context, relationships, needs of vulnerable groups.
- Virtue ethics: character, prudence, responsibility of actors.
七、示例辩题与简要案例包(供选择与本地化)
Sample resolutions and brief case packets (adapt to context)
- 公共空间人脸识别 / Facial recognition in public spaces
- 事实要点:2019年旧金山通过条例,禁止市政机构使用人脸识别。部分评测曾记录在某些算法与人群中存在性能差异;近年来基准测试显示整体性能提升但差异仍需监测。关注:公共安全与误识别风险、隐私与监控、同意与用途限制、审计与申诉机制。
- Resolution example: “Support/oppose municipal use of facial recognition in public spaces.”
- Fact notes: San Francisco’s 2019 ordinance banned city agency use of facial recognition. Historical demographic performance gaps were documented in some algorithms; overall accuracy has improved in benchmarks but disparities require monitoring. Consider public safety vs misidentification, privacy/surveillance, consent and purpose limits, audit/appeal.
- 司法风险评估(如COMPAS)/ Judicial risk assessment (e.g., COMPAS)
- 事实要点:2016年ProPublica报道该工具在种族上的错误率差异;开发方当时回应工具在分数校准上公平。研究指出当基准率不同,无法同时满足多种公平指标。关注:预审/量刑影响、透明度与可解释性、纠错与复议。
- Resolution: “Support/oppose the use of proprietary risk assessments in pretrial decisions.”
- Fact notes: In 2016, ProPublica reported racial disparities in error rates; the vendor argued the tool was fair in score calibration. When base rates differ, multiple fairness criteria cannot be satisfied simultaneously. Consider impacts on pretrial/sentencing decisions, transparency, and contestability.
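The impossibility result cited here (e.g., Kleinberg et al.; Chouldechova) can be made concrete numerically. A short sketch, assuming hypothetical error rates chosen only for illustration, uses the accounting identity linking base rate, PPV, FNR, and FPR:

```python
# With equal PPV (a proxy for calibration) and equal false-negative rates,
# groups with different base rates are FORCED to have different
# false-positive rates. All numbers below are hypothetical.

def implied_fpr(base_rate, ppv, fnr):
    """FPR implied by the identity FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)."""
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

# Two groups, same PPV (0.8) and same FNR (0.3), different base rates.
fpr_a = implied_fpr(0.5, 0.8, 0.3)  # base rate 50% -> FPR = 0.175
fpr_b = implied_fpr(0.2, 0.8, 0.3)  # base rate 20% -> FPR = 0.04375
print(f"FPR(A)={fpr_a:.4f}, FPR(B)={fpr_b:.4f}")
```

Equalizing the FPRs instead would break calibration or the FNRs, which is the trade-off the debate turns on.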
- 自动驾驶与安全 / Autonomous driving and safety
- 事实要点:2018年Uber测试车辆在亚利桑那州发生首起涉及自动驾驶测试的行人死亡事故;调查指出安全驾驶员未及时干预、系统对目标分类与制动策略存在问题。关注:测试安全阈值、驾驶员注意、责任分配与事故调查透明度。
- Resolution: “Support/oppose expanding on-road autonomous vehicle testing in urban areas.”
- Fact notes: 2018 Uber AV testing fatality in Arizona; investigations noted safety driver inattention and system classification/braking issues. Consider safety thresholds, human oversight, liability, transparency.
- 预测性警务 / Predictive policing
- 事实要点:美国圣克鲁斯市于2020年禁止预测性警务;部分城市终止相关项目。关注:犯罪预防效果证据、反馈回路与偏见、社区信任、监督与申诉。
- Resolution: “Support/oppose predictive policing tools for resource allocation.”
- Fact notes: Santa Cruz banned predictive policing in 2020; some cities ended programs. Consider evidence of efficacy, feedback loops and bias, community trust, oversight.
- 生成式AI在教育中的检测与使用 / Generative AI detection and use in education
- 事实要点:OpenAI于2023年下线其AI文本检测器,原因是准确率较低;多方提醒避免将检测结果作为唯一依据。关注:学术诚信、学习公平、误报伤害、教学设计。
- Resolution: “Support/oppose mandatory AI-writing detection for student submissions.”
- Fact notes: OpenAI retired its AI text classifier in 2023 for low accuracy; institutions caution against sole reliance. Consider academic integrity, equity, false positives, pedagogy.
提示:提供具体来源列表,鼓励学生在辩论前5分钟标注可引用的关键数据与出处。
Tip: Provide a source list; ask students to flag quotable data with citations during prep.
八、评分规约(可100分制)
Assessment rubric (suggested 100 points)
- 论证质量(25):主张清晰、结构严谨、结论与前提一致。
- 证据运用(20):来源可靠、准确引用、处理不确定性。
- 伦理分析(20):框架运用得当、权衡与正当化充分。
- 利益相关者与影响(15):识别全面、短期/长期与分配效应明确。
- 反驳与回应(10):有效识别并回应对方要点。
- 表达与合作(10):表达清晰、时间管理、团队配合与礼仪。
- Argument quality (25), Evidence use (20), Ethical analysis (20), Stakeholder/impact (15), Rebuttal skill (10), Delivery/teamwork (10).
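The rubric weights above sum to 100 and can be tallied mechanically. A minimal sketch, where the criterion keys are hypothetical labels for the six categories:

```python
# Suggested 100-point rubric weights (labels are illustrative).
RUBRIC = {
    "argument_quality": 25,
    "evidence_use": 20,
    "ethical_analysis": 20,
    "stakeholders_impact": 15,
    "rebuttal": 10,
    "delivery_teamwork": 10,
}
assert sum(RUBRIC.values()) == 100  # sanity check on the weights

def total_score(scores):
    """Sum a group's scores after checking each stays within its maximum."""
    for criterion, points in scores.items():
        cap = RUBRIC[criterion]
        if not 0 <= points <= cap:
            raise ValueError(f"{criterion}: {points} exceeds max {cap}")
    return sum(scores.values())
```

Judges can fill one dict per group; the cap check catches data-entry slips before verdicts are compared.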
九、教师主持要点
Facilitation tips
- 明确时间节点,使用可视化计时。
- 保持中立;只澄清事实与流程,不给立场。
- 主动邀请沉默成员发言;使用“轮流发言”或“写后说”降低焦虑。
- 对敏感议题设置“先定义再讨论”的程序,降低误解。
- 遇到错误信息,立即启用“暂停核查”,再继续。
- Enforce timing; remain neutral.
- Invite quieter voices; use round-robin or write-then-speak.
- Define terms before debate on sensitive points.
- Use pause-and-check for questionable claims.
十、学术诚信与AI工具使用
Academic integrity and AI tool use
- 允许用途:头脑风暴、找参考线索、生成对立观点;要求标注所用工具与用途。
- 禁止用途:将AI生成文本作为原创证据或数据;不得伪造引用。
- 数据与隐私:不上传个人敏感信息或未公开学生作业到在线工具。
- Allowed: brainstorming, locating references, surfacing counterarguments, with disclosure.
- Prohibited: submitting AI text as original evidence/data; fabricating citations.
- Privacy: do not upload sensitive or unpublished student work to online tools.
十一、差异化与包容
Differentiation and inclusion
- 提供中英关键词卡片与术语表;为英语学习者安排较短、多次的发言轮次。
- 允许替代产出(口头+提纲/图示);为需要者提供延长准备时间。
- Provide bilingual keyword cards; allow shorter, multiple speaking turns for ELLs.
- Allow alternative artifacts (oral + outline/diagram); extend prep time if needed.
十二、产出与后续
Deliverables and follow-up
- 辩论提纲与引用清单(小组提交,1页)。
- 个人反思(200字):自身立场的变化与一项政策建议。
- 拓展任务(可选):将本案转化为500字政策备忘录或伦理影响评估简表。
- Group debate brief with references (1 page).
- Individual reflection (approx. 200 words).
- Extension (optional): 500-word policy memo or mini ethical impact assessment.
十三、引导性问题(备查)
Guiding questions (for scaffolding)
- 关键利益相关者是谁?谁承担了不成比例的风险或收益?
- 哪些危害是可预见且可缓解的?代价如何?
- 若采纳相反立场,最大担忧是什么?如何以保障措施最小化?
- Which stakeholders are most affected? Any disproportionate impacts?
- Which harms are foreseeable and mitigable? At what cost?
- If you had to adopt the opposite stance, what safeguards would minimize your top concern?
参考提示(供教师核查用,课堂不必详述)
Reference pointers (for instructor verification)
- OECD AI Principles (2019), UNESCO Recommendation on the Ethics of AI (2021)
- NIST AI Risk Management Framework 1.0 (2023)
- EU AI Act: adopted in 2024 with phased implementation
- San Francisco facial recognition ordinance (2019)
- ProPublica (2016) on COMPAS and vendor response; fairness trade-offs (e.g., Kleinberg et al., Chouldechova)
- Uber AV 2018 NTSB investigation summary
- Santa Cruz predictive policing ban (2020)
- OpenAI AI text classifier retirement (2023)
执行清单(教师速用)
Instructor quick checklist
- 选定3–5个案例,备好事实包、术语表、利益相关者图与来源清单。
- 布置预读;预分小组与角色;发放评分规约与辩论流程。
- 准备计时器、白板/便签、投影(可选)与无障碍支持。
- 课中:中立澄清事实,按时间节点推进,必要时启用"暂停核查"。
- 课后:收取辩论提纲、引用清单与个人反思。
- Select 3–5 cases; prepare fact sheets, glossary, stakeholder maps, and source lists.
- Assign pre-readings; pre-assign groups and roles; share the rubric and debate flow.
- Set up timer, board/sticky notes, optional projector, and accommodations.
- In class: clarify facts neutrally, keep time, use pause-and-check as needed.
- After class: collect debate briefs, reference lists, and reflections.