Popular roles are more than a source of inspiration: they are your efficiency assistants. With carefully curated role prompts, you can quickly generate high-quality content, spark creative ideas, and find the solution that best fits your needs. Creation made easier, value made more direct!
We continuously update the role library around different user needs, so you can always find the right entry point for inspiration.
A template for building assessment criteria for assignment or project types: precise and professional.
Position statement
To ensure fairness, transparency, and a learning orientation in the assessment of group projects, adopt an analytic rubric aligned with course learning outcomes, supplemented by peer and self-assessment to differentiate individual contributions. This framework improves the reliability and interpretability of grading, supports formative feedback and self-regulated learning, and is straightforward to implement and track in a learning management system (Biggs & Tang, 2011; Jonsson & Svingby, 2007; Panadero & Jonsson, 2013; Topping, 2009).

I. Scope and evidence to submit
- Intended use: problem- or project-based group tasks at the undergraduate or graduate level (online, blended, or face-to-face).
- Evidence types:
  - Final product: digital artifact / research report / prototype with supporting documentation.
  - Process evidence: project plan, milestone records, version-control and collaboration logs (e.g., LMS activity records, document revision history), meeting minutes.
  - Reflection: group and individual reflections (choice of methods, iteration on the problem, review of ethics and data use).
  - Proof of contribution: peer and self-assessments, plus a work-allocation matrix and supporting materials where needed.

II. Scoring rubric (four performance levels with weights)
Note: the four levels are Exemplary (4), Proficient (3), Adequate (2), and Needs Improvement (1). Adjust the wording and weights to your course learning outcomes. Total = 100%.

1) Problem definition and alignment with learning outcomes (10%)
- Exemplary: clearly defines an authentic, complex problem situation; goals show strong constructive alignment with course outcomes; the definition covers boundary conditions and audience needs, and evaluation indicators are set accordingly.
- Proficient: problem and goals are reasonably clear and broadly aligned with course outcomes; key constraints and the audience are identified.
- Adequate: the problem is rather general; alignment between goals and outcomes is weak; constraint and audience analysis is superficial.
- Needs Improvement: the problem is vague or off-task; goals and audience are not clearly defined.

2) Quality of evidence and methodology (20%)
- Exemplary: systematically searches for and critically integrates high-quality evidence; method choices are theoretically grounded and the workflow is reproducible; data handling is compliant (privacy, ethics, licensing); limitations are reflected on thoroughly.
- Proficient: evidence is adequate and credible; methods are broadly appropriate and clearly reported; basic ethical considerations are in place.
- Adequate: evidence sources and method choices are insufficient; reporting is incomplete; little attention to ethics and compliance.
- Needs Improvement: evidence is weak or inappropriate; methods are untraceable; basic ethics/compliance are ignored.

3) Solution/output quality and innovation (20%)
- Exemplary: proposes a feasible, effective, and innovative solution or output; design decisions map tightly onto the evidence; shows potential for cross-context transfer and scalability.
- Proficient: the solution is feasible and logically sound, with some originality; corresponds well to the evidence.
- Adequate: the solution is broadly feasible but weakly supported by evidence or of limited originality.
- Needs Improvement: the solution is infeasible or disconnected from the evidence; no clear design rationale.

4) Technology use and accessibility (15%)
- Exemplary: uses digital tools aptly to support collaboration, creation, and visualization; the output follows accessibility good practice (e.g., alternative text for charts and media, adequate color contrast, clear structure) and attributes and licenses data and copyrighted material correctly; technology choices reflect efficiency and sustainability.
- Proficient: technology is chosen appropriately and runs reliably; accessibility and copyright requirements are largely met; output is audience-friendly.
- Adequate: the technical implementation is mostly complete; minor gaps in accessibility and copyright compliance; usability is average.
- Needs Improvement: the implementation is unstable or poorly chosen; accessibility and copyright are clearly neglected, hindering use and dissemination.
(Note: for accessibility, refer to the core principles of the W3C WCAG 2.1.)

5) Collaboration process and individual contribution, including peer assessment (20%)
- Exemplary: clear evidence of effective collaboration (explicit division of labor, complementary skills, conflict management, on-time delivery); peer ratings are consistent with process data; individual contributions are balanced and show professional growth.
- Proficient: division of labor and collaboration run largely smoothly; individual contributions are clear, with occasional imbalances that the team corrects.
- Adequate: collaboration is loosely organized; individual contributions are unbalanced with no evidence of improvement.
- Needs Improvement: collaboration breaks down or free-riding is evident; no proof of individual contribution.
(Suggestion: after calibration, count peer ratings toward part of this criterion's score to sharpen differentiation.)

6) Communication and multimodal presentation (15%)
- Exemplary: oral, written, and visual expression is precise and rigorously structured, following academic or industry genre conventions; clearly adapted to the audience; the narrative is coherent and embeds data evidence; citations and acknowledgments are accurate and correctly formatted.
- Proficient: expression is clear and well structured; citations are largely correct, with minor formatting or audience-adaptation issues.
- Adequate: expression is broadly understandable; structure and citations are somewhat disorganized.
- Needs Improvement: expression is unclear or logically disjointed; citations are improper or raise academic-integrity concerns.

III. Grading procedure and suggested weights
- Composition of the overall grade:
  - Instructor's rubric-based score: 70% (ensures alignment with learning outcomes and provides formative feedback).
  - Peer assessment: 20% (calibrates individual contributions and fosters metacognition; run calibration training with reference samples before grading).
  - Self-assessment and reflection: 10% (focused on goal attainment, use of evidence, and an improvement plan).
- Individual grade adjustment: starting from the group score, adjust each member's score proportionally up or down based on peer ratings and process evidence (publish the rules transparently and provide an appeals channel). A worked sketch follows the usage notes below.

IV. Implementation and quality assurance (evidence-based recommendations)
- Improving scoring reliability:
  - Use an analytic rubric and calibrate raters before scoring; anchor exemplars and second-marking of disputed items significantly improve consistency (Jonsson & Svingby, 2007).
  - Provide explicit behavioral descriptors and observable evidence to reduce subjectivity (Brookhart, 2013).
- Validity and learning orientation:
  - Use constructive alignment to keep tasks, teaching activities, and assessment consistent, which improves learning outcomes (Biggs & Tang, 2011).
  - Using rubrics for formative feedback and self-assessment promotes self-regulated learning and achievement gains (Panadero & Jonsson, 2013).
- Usability and fairness of peer assessment:
  - Peer marks agree moderately with teacher marks; calibration training and aggregation across multiple indicators improve fairness (Falchikov & Goldfinch, 2000; Topping, 2009).
  - Identify team effectiveness and differences in contribution by combining structured peer-rating instruments with process evidence (Oakley et al., 2004).
- Technology support and evidence collection:
  - Enable rubric tools and peer-review workflows in the LMS and collect process data (submission timelines, revision histories, discussion participation) to support traceability and re-marking.
  - Follow the WCAG 2.1 principles (perceivable, operable, understandable, robust) to keep the work accessible to all learners (W3C, 2018).

V. Academic integrity and ethics
- State data sources, copyright, and licensing terms explicitly; disclose and properly cite the use of generative and assistive technologies.
- Surveys or interviews involving human participants must follow the applicable ethics-review and informed-consent requirements.

References (APA 7th ed.)
- Association of American Colleges and Universities. (2009). Teamwork VALUE rubric. AAC&U.
- Biggs, J., & Tang, C. (2011). Teaching for quality learning at university (4th ed.). Open University Press.
- Brookhart, S. M. (2013). How to create and use rubrics for formative assessment and grading. ASCD.
- Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks. Review of Educational Research, 70(3), 287–322.
- Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2), 130–144.
- Oakley, B., Felder, R. M., Brent, R., & Elhajj, I. (2004). Turning student groups into effective teams. Journal of Student Centered Learning, 2(1), 9–34.
- Panadero, E., & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purposes revisited: A review. Educational Research Review, 9, 129–144.
- Topping, K. J. (2009). Peer assessment. Theory Into Practice, 48(1), 20–27.
- World Wide Web Consortium. (2018). Web content accessibility guidelines (WCAG) 2.1. W3C.

Usage notes
- Embed the rubric in the course syllabus and the LMS assessment page, together with sample work and level-by-level annotations.
- Before the first group task, spend 10–15 minutes walking through the criteria and running a short peer-review practice round.
- Provide one round of rubric-based formative feedback at project midpoint so teams can iterate.
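To make Section III's 70/20/10 composition and proportional individual adjustment concrete, here is a minimal Python sketch. It is illustrative only: the function name, the mean-normalized peer factor, and the ±10% cap are assumptions of this sketch, not rules taken from the rubric; publish whatever adjustment rule you actually adopt, as Section III recommends.

```python
from statistics import mean

def individual_grade(instructor_score: float,
                     peer_scores: dict[str, float],
                     self_scores: dict[str, float],
                     member: str) -> float:
    """Combine the suggested 70/20/10 weights, then apply a proportional
    individual adjustment derived from peer ratings."""
    peer_avg = mean(peer_scores.values())
    # Group-level composite on a 0-100 scale (Section III weights).
    group = (0.70 * instructor_score
             + 0.20 * peer_avg
             + 0.10 * mean(self_scores.values()))
    # Illustrative peer factor: this member's rating relative to the
    # team average, capped at +/-10% so one rating cannot dominate.
    factor = max(0.90, min(1.10, peer_scores[member] / peer_avg))
    return round(group * factor, 1)

# Example: a three-person team (names and numbers are made up).
peers = {"Ana": 88.0, "Ben": 80.0, "Chen": 92.0}  # calibrated peer ratings
selfs = {"Ana": 85.0, "Ben": 82.0, "Chen": 90.0}  # self-assessment scores
print(individual_grade(84.0, peers, selfs, "Ben"))  # -> 76.2
```

Capping the factor is one way to honor the rubric's call for transparent, appealable adjustments: the rule is simple enough to publish in the syllabus alongside the rubric itself.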
Analytic rubric (basic) for evaluating technology‑enhanced interactive tasks

Purpose and scope
This rubric is designed for evaluating learner performance in technology‑enabled interactive tasks (e.g., collaborative documents, discussion forums, peer‑review activities, simulations, branching scenarios, or interactive case studies). The criteria are grounded in evidence on effective feedback and assessment, quality of interaction, multimodal communication, and accessibility. The rubric is intended for formative and summative use and should be aligned to explicit learning outcomes for the course or module.

Performance levels
- 4 = Exemplary
- 3 = Proficient
- 2 = Developing
- 1 = Beginning

Criteria and performance descriptors

1) Alignment with learning outcomes and task purpose
- 4: Consistently demonstrates targeted outcomes; all contributions explicitly address task purpose and success criteria with accurate, well‑chosen evidence.
- 3: Demonstrates targeted outcomes; most contributions clearly address task purpose with appropriate evidence.
- 2: Partially demonstrates targeted outcomes; contributions inconsistently address task purpose or rely on limited/misaligned evidence.
- 1: Does not demonstrate targeted outcomes; contributions are off‑task or unsupported.

2) Quality of interaction and knowledge building (quality of idea development, responsiveness, and advancement of shared understanding)
- 4: Initiates and sustains high‑level interactive moves (questioning, elaboration, synthesis); builds on peers’ ideas to advance collective understanding; prompts further inquiry.
- 3: Responds constructively to peers and extends discussion with relevant elaboration or clarification; occasional synthesis.
- 2: Limited responsiveness (e.g., agree/disagree without elaboration); minimal advancement of group thinking.
- 1: Isolated or perfunctory posts/edits; no evidence of engagement with peers or co‑construction.

3) Disciplinary accuracy and application (accuracy, reasoning, transfer to authentic problems)
- 4: Disciplinary concepts are accurate and applied appropriately to authentic or novel contexts; reasoning is explicit and well‑justified.
- 3: Concepts are accurate with minor lapses; application is appropriate to familiar contexts; reasoning is mostly clear.
- 2: Noticeable inaccuracies or superficial application; reasoning is incomplete or partially flawed.
- 1: Substantial inaccuracies; reasoning and application are absent or incorrect.

4) Multimodal communication and learning‑science‑informed design (clarity, coherence, and effective use of media consistent with multimedia principles)
- 4: Text, visuals, audio, and/or interactivity are selected and integrated to reduce extraneous load, highlight essential material, and support generative processing (e.g., signaling, coherence, redundancy avoided); messages are concise and coherent.
- 3: Media choices mostly support understanding; minor issues with coherence or unnecessary elements; overall clarity maintained.
- 2: Media choices intermittently hinder clarity (e.g., clutter, redundant narration and text, weak signaling); coherence is uneven.
- 1: Media use impedes understanding (e.g., distracting elements, poor organization, unreadable assets).

5) Feedback and reflection for improvement (quality of peer/self‑feedback and evidence of revision)
- 4: Provides specific, criteria‑referenced, actionable feedback; demonstrates incorporation of feedback through substantive revisions; articulates reflective insights about learning and next steps.
- 3: Provides mostly specific and constructive feedback; makes relevant revisions; reflection identifies some strengths and areas for growth.
- 2: Feedback is general or focuses on praise without guidance; limited or surface‑level revisions; reflection is descriptive rather than analytic.
- 1: Little or no feedback provided; no revisions evident; reflection absent.

6) Technical quality, accessibility, and responsible use (functionality, usability, accessibility, and ethical/secure practice)
- 4: Product functions as intended across devices; navigation is clear; adheres to accessibility best practices (e.g., headings, alt text, color contrast, captions, keyboard access) and professional/ethical norms (privacy, citation, licensing).
- 3: Minor technical or accessibility issues that do not impede use; generally responsible and ethical use of tools and content.
- 2: Recurrent technical or accessibility issues that hinder use; inconsistent adherence to ethical or citation practices.
- 1: Major technical failures or inaccessible design; inappropriate or unsafe use (e.g., disclosing personal data, plagiarism).

Default weighting (adjust to outcomes)
- Alignment with outcomes and purpose: 20%
- Quality of interaction and knowledge building: 20%
- Disciplinary accuracy and application: 25%
- Multimodal communication and design: 15%
- Feedback and reflection: 10%
- Technical quality, accessibility, and responsible use: 10%

Scoring and implementation guidance
- Constructive alignment: Map each criterion to specific learning outcomes and success criteria made transparent to learners in advance (Brookhart, 2013).
- Calibration for reliability: Use annotated exemplars for each level; conduct brief rater calibration; when high‑stakes, double‑mark a sample and check agreement before full scoring (a short agreement-check sketch follows the references below).
- Evidence sources: Evaluate artifacts directly (posts/edits with timestamps, version histories, comment threads, revision logs, media files) to ensure claims about interaction and revision are verifiable.
- Feedback workflow: Return rubric scores with criterion‑level comments linked to improvement actions; allow a revision window to leverage the feedback effect (Hattie & Timperley, 2007; Nicol & Macfarlane‑Dick, 2006).
- Accessibility check: Apply WCAG 2.2 success criteria appropriate to the task modality (e.g., captions for video, alt text for images, sufficient color contrast, keyboard operability) and align with UDL principles to reduce unnecessary barriers (CAST, 2018; W3C, 2023).

Rationale and evidence base
- Interactive quality matters: Interaction that is constructive and genuinely interactive (co‑elaboration and contingency) is associated with stronger learning gains than passive or superficial participation (Chi & Wylie, 2014; Garrison, Anderson, & Archer, 2000).
- Clear criteria and actionable feedback: Transparent criteria and descriptive feedback improve learning and support self‑regulation; rubrics should articulate performance levels tied to outcomes (Brookhart, 2013; Hattie & Timperley, 2007; Nicol & Macfarlane‑Dick, 2006).
- Multimedia and cognitive load: Media should be selected and integrated to reduce extraneous load and enhance generative processing (Mayer, 2021).
- Inclusive design: Applying UDL guidelines and WCAG improves accessibility and participation without diluting rigor (CAST, 2018; W3C, 2023).

References (APA 7th)
- Brookhart, S. M. (2013). How to create and use rubrics for formative assessment and grading. ASCD.
- CAST. (2018). Universal Design for Learning Guidelines version 2.2. https://udlguidelines.cast.org
- Chi, M. T. H., & Wylie, R. (2014). The ICAP framework: Linking cognitive engagement to active learning outcomes. Educational Psychologist, 49(4), 219–243.
- Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text‑based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2–3), 87–105.
- Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
- Mayer, R. E. (2021). Multimedia learning (3rd ed.). Cambridge University Press.
- Nicol, D. J., & Macfarlane‑Dick, D. (2006). Formative assessment and self‑regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218.
- World Wide Web Consortium (W3C). (2023). Web Content Accessibility Guidelines (WCAG) 2.2. https://www.w3.org/TR/WCAG22/
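The calibration guidance above recommends double-marking a sample and checking agreement before full scoring. A minimal sketch of such a check, assuming two raters score the same calibration sample on the 4–1 scale; the sample data, and the choice of Cohen's kappa as the agreement statistic, are illustrative rather than prescribed by the rubric:

```python
from collections import Counter

def cohens_kappa(rater_a: list[int], rater_b: list[int]) -> float:
    """Chance-corrected agreement between two raters who double-marked
    the same calibration sample on the 4-3-2-1 scale."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Agreement expected by chance from each rater's marginal distribution.
    expected = sum((freq_a[lvl] / n) * (freq_b[lvl] / n)
                   for lvl in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

# Illustrative double-marked sample: one score per artifact, per rater.
rater_a = [4, 3, 3, 2, 4, 1, 3, 2, 3, 4]
rater_b = [4, 3, 2, 2, 4, 2, 3, 2, 3, 3]
print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # ~0.57 here
```

Raw percent agreement looks flattering when most work clusters at one level; a chance-corrected statistic is harder to game, and a low value signals that the anchor exemplars need discussion before marking continues.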
Quickly build assignment scoring rubrics aligned with course objectives, standardize grading across teaching assistants, and support centralized end-of-term marking and interpretable score summaries.
Create unified grade-level and subject-wide evaluation templates covering project-based learning, inquiry reports, and presentations, facilitating home-school communication and student self-assessment.
Generate standardized evaluation forms and sample comments for new course launches, matched to the platform's interactive tasks, improving tutor feedback efficiency and the learner experience.
Define scoring standards for hands-on job-skill tasks and team projects, with explicit weights and performance levels, to support cross-department review and certification.
Create assessment criteria for thesis, proposal, and mid-term project reviews, with explicit evidence requirements and citation standards, reducing repeated explanations and disputes.
Quickly produce assessment frameworks and report skeletons, customizing the dimensions for different schools and courses, shortening delivery cycles and strengthening your professional image.
With minimal input, instantly produce ready-to-use assessment rubric templates. Suitable for assignments, course projects, hands-on tasks, presentations, and similar scenarios, the template automatically supplies evaluation dimensions, level descriptors, scoring weights, and example evidence, plus reusable, high-quality feedback phrases. It supports one-click switching of output language and academic writing style, emphasizing evidence-based, clearly structured, and properly cited expression. Grounded in your teaching objectives and learning-platform environment, it offers actionable suggestions for technology-enhanced teaching, helping teachers and curriculum teams build a rubric from scratch in minutes, markedly reducing subjective bias, improving scoring consistency and transparency, and accumulating a reusable, organization-level library of assessment standards that ultimately improves the student experience and course quality.
Copy the prompt generated by this template into your usual chat application (such as ChatGPT or Claude) and use it directly in conversation, with no extra development. Best for quick personal trials and lightweight use.
Turn the prompt template into an API: your program can adjust the template parameters freely and call it through the interface, making automation and batch processing straightforward (see the sketch below). Best for developer integration and embedding in business systems.
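As an illustration of what such an integration can look like, here is a minimal Python sketch. Everything platform-specific is a placeholder assumption: the endpoint URL, the payload fields (`template_id`, `params`), the bearer-token header, and the `prompt` response field should all be replaced according to the platform's actual API documentation.

```python
import json
import urllib.request

# Hypothetical endpoint and payload shape; substitute the platform's
# real URL, authentication scheme, and parameter names.
API_URL = "https://api.example.com/v1/prompts/render"

payload = {
    "template_id": "rubric-builder",      # placeholder template ID
    "params": {
        "task_type": "group project",     # what the rubric will assess
        "level": "undergraduate",
        "language": "en",
        "criteria_count": 6,
    },
}

req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <YOUR_API_KEY>"},
)
with urllib.request.urlopen(req) as resp:
    rendered_prompt = json.load(resp)["prompt"]  # assumed response field
print(rendered_prompt)
```

Because the parameters travel in a structured payload, a batch job can loop over course records and render one tailored rubric prompt per course without manual editing.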
Configure the corresponding server address in your MCP client so your AI applications can invoke the prompt template automatically (a configuration sketch follows). Best for advanced users and team collaboration, letting prompts move seamlessly across different AI tools.
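A sketch of what the client-side setup might look like, assuming an MCP client that reads a JSON configuration with an `mcpServers` section (a common convention, though not guaranteed for every client); the server label and URL below are placeholders for the address the platform actually provides.

```python
import json

# Hypothetical MCP client configuration. The "mcpServers" layout follows
# a convention used by common MCP clients; the label and URL are
# placeholders, and your client may expect a different file or schema.
config = {
    "mcpServers": {
        "rubric-prompts": {                       # any label you choose
            "url": "https://mcp.example.com/sse"  # platform-provided address
        }
    }
}

# Write to wherever your MCP client expects its configuration file.
with open("mcp_config.json", "w", encoding="utf-8") as f:
    json.dump(config, f, indent=2)
```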