Design a 12-week graduate-level curriculum on responsible AI governance
Context
A university department is launching a new graduate elective on responsible AI governance for Fall 2026. The professor must design a 12-week curriculum that meets accreditation standards, aligns with international computing curriculum guidelines, and takes students progressively from foundational concepts to advanced governance practice.
Before (Unstructured)
"Design a course on AI governance for graduate students."
What is missing
- × No institutional context — which department, what level of rigor?
- × No learning outcome framework specified (Bloom's, competency-based)
- × No duration, format, or assessment strategy defined
- × No curriculum design methodology — how should topics be sequenced?
- × No evaluation criteria for curriculum quality
After (MOTIVE-Structured)
As a university professor in information systems, I need a 12-week curriculum because the department is launching a new graduate elective on responsible AI governance for Fall 2026, requiring faculty review and accreditation alignment.
Deliver a complete course syllabus with weekly topics, learning objectives mapped to Bloom's taxonomy, curated reading lists, three major assignments, and assessment rubrics. Success criteria: (1) All 6 Bloom's levels represented, (2) Topics sequenced from foundations to advanced, (3) Assessment weights aligned with learning outcomes.
Use Backward Design (Wiggins & McTighe) for curriculum structure. Align with ACM/IEEE Computing Curricula 2023 and UNESCO AI Ethics Recommendation. Reference the EU AI Act for regulatory governance content.
1. Define 4-5 course-level learning outcomes using Bloom's taxonomy verbs. 2. Sequence 12 weekly topics progressing from AI ethics foundations to organizational governance implementation. 3. Design 3 major assessments (case study, policy brief, examination). 4. Curate 3-4 readings per week mixing academic papers and policy documents. If source availability is uncertain, provide alternatives.
Level: Master's. Class size: 25-30. Prerequisites: Introduction to AI or equivalent. Format: Hybrid (2h lecture + 1h seminar weekly). Assessment: Case study analysis (30%), policy brief (30%), final examination (25%), seminar participation (15%). Exclude: Technical ML implementation, undergraduate-level introductions.
Evaluate: (1) Bloom's taxonomy coverage 1-5, (2) Topic coherence and progression 1-5, (3) Assessment-outcome alignment 1-5, (4) Reading quality and diversity 1-5. Ensure all 6 Bloom's levels are represented across the 12 weeks. If any criterion < 3.5, revise the weakest area.
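One of the success criteria above (all 6 Bloom's levels represented across the 12 weeks) is mechanically checkable once the weekly plan is held as structured data. The sketch below is illustrative only; the `WeeklyModule` fields and the `BLOOM_LEVELS` list are assumptions for the example, not part of the prompt.

```python
from dataclasses import dataclass, field

# The six levels of the revised Bloom's taxonomy, ordered low to high.
BLOOM_LEVELS = ["Remember", "Understand", "Apply", "Analyze", "Evaluate", "Create"]

@dataclass
class WeeklyModule:
    week: int
    topic: str
    bloom_levels: list[str] = field(default_factory=list)  # e.g. ["Analyze", "Evaluate"]
    readings: list[str] = field(default_factory=list)

def missing_bloom_levels(plan: list[WeeklyModule]) -> list[str]:
    """Return Bloom's levels not yet targeted by any week in the plan."""
    covered = {level for module in plan for level in module.bloom_levels}
    return [level for level in BLOOM_LEVELS if level not in covered]

# Partial plan containing only the Week 3 module shown in the sample output below.
plan = [WeeklyModule(3, "Algorithmic Fairness and Bias Mitigation",
                     ["Analyze", "Evaluate"],
                     ["Barocas & Selbst (2016)", "EU HLEG Trustworthy AI Guidelines Ch. 3"])]
print(missing_bloom_levels(plan))  # the four levels this partial plan does not yet cover
```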
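The assessment weights in the course variables are a small piece of configuration worth validating before the syllabus goes to faculty review. A minimal sketch, assuming the weights are kept in a plain dictionary (the key names are illustrative):

```python
# Assessment components and weights from the course variables above.
assessment_weights = {
    "case_study_analysis": 0.30,
    "policy_brief": 0.30,
    "final_examination": 0.25,
    "seminar_participation": 0.15,
}

total = sum(assessment_weights.values())
# Use a tolerance rather than exact equality to allow for floating-point rounding.
assert abs(total - 1.0) < 1e-9, f"Assessment weights sum to {total:.2f}, not 1.00"
```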
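The revision rule in the evaluation step ("if any criterion < 3.5, revise the weakest area") can likewise be expressed as a short scoring routine. This is a sketch under the assumption that each criterion receives a single 1-5 score; the criterion keys and the example scores are placeholders, not results.

```python
def weakest_criteria(scores: dict[str, float], threshold: float = 3.5) -> list[str]:
    """Return criteria scoring below the revision threshold, weakest first."""
    flagged = [name for name, score in scores.items() if score < threshold]
    return sorted(flagged, key=lambda name: scores[name])

# Placeholder scores for the four rubric criteria defined above.
rubric = {
    "blooms_taxonomy_coverage": 4.5,
    "topic_coherence_and_progression": 4.0,
    "assessment_outcome_alignment": 3.0,
    "reading_quality_and_diversity": 3.8,
}
for criterion in weakest_criteria(rubric):
    print(f"Revise: {criterion} scored {rubric[criterion]}")
```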
Output Comparison
Before Output
Week 1: Introduction to AI Ethics. Week 2: Bias in AI Systems. Week 3: Fairness and Transparency. Week 4: Privacy and Data Protection. Week 5: Regulation of AI. The course will cover various topics related to AI governance and students will write a final paper.
After Output
Week 3: Algorithmic Fairness and Bias Mitigation. Learning Outcomes: (1) Analyze sources of bias in ML pipelines (Bloom: Analyze), (2) Evaluate fairness metrics trade-offs (Bloom: Evaluate). Readings: Barocas & Selbst (2016), EU HLEG Trustworthy AI Guidelines Ch. 3. Assessment: Case study checkpoint — identify 3 bias vectors in a provided hiring algorithm dataset.
Key Improvement
The Instruction component produced the largest quality impact by enforcing Bloom's taxonomy mapping per week and structured assessment design — transforming a topic list into a pedagogically sequenced, accreditation-ready curriculum.