My work focuses on strengthening assessment, applying learning science principles, and exploring how AI can be integrated in ways that enhance human judgment and long-term student success.

We are entering a moment where knowing how to think matters more than memorizing what to think. AI makes information abundant. What remains scarce, and increasingly valuable, is judgment.

My work centers on redesigning learning systems for that reality. That means grounding decisions in learning science, building assessments that measure reasoning rather than recall, and integrating AI in ways that support cognitive growth instead of dependency.

Strong learning systems are not accidental. They are intentionally designed. They require students to demonstrate what they know, refine their thinking through meaningful feedback, apply knowledge in new contexts, and operate at a level that stretches their capability. They define clearly what students must be able to do independently, even when AI tools are available. And they include governance structures that protect rigor as innovation accelerates.

At the center of my work is a commitment to exploring how universities, certification bodies, and learning organizations can measure real reasoning, apply learning science in operational ways, integrate AI thoughtfully, and build governance structures that protect academic and professional standards at scale.

Hello! I’m Michelle Marlowe.

Education is changing. The question isn't whether AI belongs, but whether we will design learning systems that strengthen critical thinking as education evolves.

Building What’s Next

Most institutions already have strong faculty, instructional designers, researchers, and product teams. What is often missing is alignment. AI affects assessment validity, definitions of competence, governance, and long-term skill development, yet those conversations frequently happen in silos.

My work sits at the intersection of learning science, assessment architecture, and AI strategy. It focuses on clarifying shared definitions of competence and redesigning systems so they measure reasoning, protect rigor, and strengthen critical thinking in an AI-enabled world.

This platform is not a consulting practice. I’m not selling services. It exists as a space for ideas, dialogue, and shared thinking. I welcome conversation with leaders, faculty, and teams who are wrestling with these questions and want to engage thoughtfully about what comes next.

  • Assessment is where standards live, and where they quietly erode if not designed with intention.

    As AI reshapes how students access information and produce work, assessment design becomes central to education reform. Institutions must reconsider whether their systems measure reasoning, judgment, and transfer, or simply reward surface recall and task completion.

    This requires examining evaluation models, strengthening rubric design, developing AI-aware assessment practices, and redefining validity in environments where generative tools are widely available.

    If education is to maintain credibility in an AI-enabled world, assessment systems must reflect real-world competence and clearly define what students can do independently. Strengthening assessment is not a small issue; it is foundational to preserving academic rigor.

  • Learning science gives clear direction on what builds durable skill: retrieval, feedback, calibrated challenge, and meaningful application.

    Yet these principles are often discussed in theory more than applied in practice. Education systems and learning products do not always reflect what cognitive research tells us about how people retain and transfer knowledge.

    Applying learning science means rethinking curriculum structures, aligning instruction with evidence from cognitive research, designing product features that support long-term retention, and measuring mastery rather than simple engagement.

    The goal is straightforward: learning environments that produce lasting capability, not temporary performance.

  • AI adoption without clear structure invites inconsistency and risk.

    As institutions integrate generative tools into teaching, assessment, and operations, the question is not whether to use AI, but how to use it responsibly. Without clear standards, AI can weaken critical thinking, blur definitions of competence, and create governance gaps.

    Intentional integration requires defining AI literacy expectations, establishing review and oversight structures, designing human-in-the-loop workflows, and aligning AI use with institutional values and academic standards.

    Responsible integration is not about slowing innovation. It is about ensuring that innovation strengthens rigor, supports equity, and contributes to long-term student success.