Education · 15 min

Design a 12-week graduate-level curriculum on responsible AI governance

Archetype: Planner Tier 3

Context

A university department is launching a new graduate elective on responsible AI governance for Fall 2026. The professor must design a 12-week curriculum that meets accreditation standards, aligns with international computing curriculum guidelines, and progressively builds students' skills from foundational concepts to advanced governance practice.

Before (Unstructured)

"Design a course on AI governance for graduate students."

What is missing

  • × No institutional context — which department, what level of rigor?
  • × No learning outcome framework specified (Bloom's, competency-based)
  • × No duration, format, or assessment strategy defined
  • × No curriculum design methodology — how should topics be sequenced?
  • × No evaluation criteria for curriculum quality

After (MOTIVE-Structured)

[M] Motivation

As a university professor in information systems, I need a 12-week curriculum because the department is launching a new graduate elective on responsible AI governance for Fall 2026, requiring faculty review and accreditation alignment.

[O] Object

Deliver a complete course syllabus with weekly topics, learning objectives mapped to Bloom's taxonomy, curated reading lists, three major assignments, and assessment rubrics. Success criteria: (1) All 6 Bloom's levels represented, (2) Topics sequenced from foundations to advanced, (3) Assessment weights aligned with learning outcomes.
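Success criterion (1) — all six Bloom's levels represented — is mechanically checkable. A minimal sketch of that check, using a hypothetical mapping of weeks to the Bloom's levels their outcomes target (the week numbers and level assignments below are illustrative, not part of the prompt):

```python
# All six levels of Bloom's taxonomy that the syllabus must cover.
BLOOM_LEVELS = {"Remember", "Understand", "Apply", "Analyze", "Evaluate", "Create"}

# Hypothetical mapping: week number -> Bloom's levels targeted by that
# week's learning outcomes (only a few weeks shown for illustration).
weekly_outcomes = {
    1: {"Remember", "Understand"},
    3: {"Analyze", "Evaluate"},
    7: {"Apply", "Evaluate"},
    10: {"Create"},
}

# Union of every level targeted anywhere in the course.
covered = set().union(*weekly_outcomes.values())
missing = BLOOM_LEVELS - covered

if missing:
    print(f"Missing Bloom's levels: {sorted(missing)}")
else:
    print("All six Bloom's levels are represented.")
```

The same pattern scales to a full 12-week mapping: any level left out of the union is flagged before the syllabus goes to faculty review.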

[T] Tool

Use Backward Design (Wiggins & McTighe) for curriculum structure. Align with ACM/IEEE Computing Curricula 2023 and UNESCO AI Ethics Recommendation. Reference the EU AI Act for regulatory governance content.

[I] Instruction

1. Define 4-5 course-level learning outcomes using Bloom's taxonomy verbs.
2. Sequence 12 weekly topics progressing from AI ethics foundations to organizational governance implementation.
3. Design 3 major assessments (case study, policy brief, examination).
4. Curate 3-4 readings per week mixing academic papers and policy documents. If source availability is uncertain, provide alternatives.

[V] Variables

Level: Master's. Class size: 25-30. Prerequisites: Introduction to AI or equivalent. Format: Hybrid (2h lecture + 1h seminar weekly). Assessment: Case study analysis (30%), policy brief (30%), final examination (25%), seminar participation (15%). Exclude: Technical ML implementation, undergraduate-level introductions.

[E] Evaluation

Evaluate: (1) Bloom's taxonomy coverage 1-5, (2) Topic coherence and progression 1-5, (3) Assessment-outcome alignment 1-5, (4) Reading quality and diversity 1-5. Ensure all 6 Bloom's levels are represented across the 12 weeks. If any criterion < 3.5, revise the weakest area.
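The [E] step — score four criteria on a 1-5 scale and revise the weakest area when any criterion falls below 3.5 — can be sketched as a small routine. The scores below are hypothetical placeholders, not measured values:

```python
# Hypothetical rubric scores (1-5) for one evaluation pass.
scores = {
    "blooms_coverage": 4.5,
    "topic_coherence": 3.0,      # below threshold -> flag for revision
    "assessment_alignment": 4.0,
    "reading_quality": 4.5,
}

THRESHOLD = 3.5

# Identify the weakest criterion and whether it triggers a revision.
weakest = min(scores, key=scores.get)
needs_revision = scores[weakest] < THRESHOLD
average = sum(scores.values()) / len(scores)

print(f"average={average:.1f}, weakest={weakest}, revise={needs_revision}")
```

Run iteratively: revise the flagged area, re-score, and stop once every criterion clears 3.5.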

Output Comparison

Before Output

Week 1: Introduction to AI Ethics. Week 2: Bias in AI Systems. Week 3: Fairness and Transparency. Week 4: Privacy and Data Protection. Week 5: Regulation of AI. The course will cover various topics related to AI governance and students will write a final paper.

Week 1: Introduction to AI Ethics. This week introduces students to the basic concepts of AI ethics and why it matters in today's world. We'll discuss some key ethical frameworks and how they apply to artificial intelligence.

Week 2: Bias in AI Systems. We'll look at how bias can appear in AI systems and discuss some notable examples. Students will learn about different types of bias and why they're problematic.

Week 3: Fairness and Transparency. This week covers the concepts of fairness and transparency in AI. We'll discuss what it means for an algorithm to be fair and why transparency is important for trust.

Week 4: Privacy and Data Protection. Students will learn about privacy concerns related to AI systems and data protection regulations. We'll discuss GDPR and other privacy frameworks.

Week 5: Regulation of AI. We'll look at current and proposed regulations around AI. Students will learn about different regulatory approaches from around the world.

The course will cover various additional topics in the remaining weeks, and students will write a final paper on a topic of their choice.

After Output

Week 3: Algorithmic Fairness and Bias Mitigation. Learning Outcomes: (1) Analyze sources of bias in ML pipelines (Bloom: Analyze), (2) Evaluate fairness metrics trade-offs (Bloom: Evaluate). Readings: Barocas & Selbst (2016), EU HLEG Trustworthy AI Guidelines Ch. 3. Assessment: Case study checkpoint — identify 3 bias vectors in a provided hiring algorithm dataset.

**Course-Level Learning Outcomes (Backward Design)**

Upon completion, students will: (1) Analyze ethical tensions in AI deployment using established philosophical frameworks (Bloom: Analyze), (2) Evaluate competing governance frameworks across jurisdictions (Bloom: Evaluate), (3) Create an original AI governance policy brief for a real organization (Bloom: Create), (4) Apply risk assessment methodologies to AI system auditing (Bloom: Apply).

**Week 3: Algorithmic Fairness and Bias Mitigation**

Learning Outcomes: (1) Analyze sources of bias in ML pipelines including training data, feature selection, and evaluation metrics (Bloom: Analyze), (2) Evaluate trade-offs between competing fairness metrics such as demographic parity, equalized odds, and individual fairness (Bloom: Evaluate).
Readings: Barocas & Selbst (2016) 'Big Data's Disparate Impact'; EU HLEG Trustworthy AI Guidelines Ch. 3; Chouldechova (2017) on impossibility theorems.
Seminar activity: Hands-on audit of a provided hiring algorithm dataset using the Aequitas toolkit.
Assessment checkpoint: Submit case study analysis identifying 3 bias vectors, proposed mitigation strategies, and fairness metric justification (feeds into 30% case study grade).

**Week 7: The EU AI Act — Risk-Based Regulatory Framework**

Learning Outcomes: (1) Classify AI systems by risk tier per EU AI Act Annexes (Bloom: Apply), (2) Evaluate compliance requirements for high-risk AI systems (Bloom: Evaluate).
Readings: EU AI Act Articles 6-9, 52; Veale & Borgesius (2021) on risk classification challenges.
Seminar: Moot regulatory hearing simulation — students argue for/against classification of a facial recognition system.

**Assessment Architecture (Aligned to Bloom's Progression)**

Assignment 1 — Case Study Analysis (30%, Weeks 3-5): Progressive case study with 3 checkpoints, analyzing bias, privacy, and accountability in a single AI system.
Assignment 2 — Policy Brief (30%, Weeks 8-10): Original 3,000-word governance policy brief for a real organization, requiring stakeholder analysis and regulatory mapping.
Final Examination (25%, Week 12): Scenario-based exam with tiered questions spanning Remember through Create.
Seminar Participation (15%): Weekly discussion contributions assessed via structured rubric covering preparation, argumentation, and peer engagement.

Evaluation Scores

  • Average Before: 1.8/5
  • Average After: 4.8/5
  • Improvement: +171%

Per-criterion MOTIVE scores (Before → After, contributing components):

  • Goal Alignment: 2 → 5 (M, O)
  • Context Appropriateness: 1 → 5 (M, T, V)
  • Clarity: 3 → 5 (I, V)
  • Systematic Iteration: 1 → 4 (E)

Key Improvement

The Instruction component produced the largest quality impact by enforcing Bloom's taxonomy mapping per week and structured assessment design — transforming a topic list into a pedagogically sequenced, accreditation-ready curriculum.
