A Guide for the Perplexed Pedagogue
There came a day, in late 2024 or early 2025, when I first understood that our approach to artificial intelligence in education was fundamentally backwards. It was not unlike the moment when humanity first realized that the Earth revolves around the Sun, rather than the other way around: a simple shift in perspective that changes everything.
The story begins with a teacher facing an impossible problem. Rob Nelson, an educator at the University of Pennsylvania, found himself confronting what appeared to be an existential threat to his profession. Large language models had arrived, and the educational establishment was responding with the predictable human reaction to technological change: panic, followed by an immediate desire to ban, control, or somehow make the new thing go away.
But Nelson possessed that rarest of qualities among educators: the willingness to question fundamental assumptions. Instead of asking “How do we prevent students from using AI?” he asked a far more interesting question: “What if we’re solving the wrong problem entirely?”
The First Law: AI Must Serve Process, Not Replace It
The first breakthrough came when Nelson realized that educational institutions had been optimizing for the wrong variables. For decades, we had focused obsessively on outcomes—grades, test scores, measurable achievements—while neglecting the very processes that make learning meaningful.
“Institutions have been structured around outputs and outcomes while neglecting processes,” Nelson observed. This insight strikes at the heart of what I would call the First Law of AI in Education: An AI tool must amplify human learning, not replace it.
Consider Nelson’s implementation of JeepyTA, an AI tool that provides feedback on student writing. Rather than replacing human judgment, it served as what we might call a “cognitive amplifier,” enhancing the peer review process by giving students critical language they could use to provide meaningful feedback to their classmates. The AI didn’t eliminate the human element; it made it more effective.
This is not unlike the relationship between a telescope and an astronomer. The telescope doesn’t replace the astronomer’s ability to observe and understand the cosmos; it amplifies that ability, allowing the human to see farther and more clearly than would otherwise be possible.
The Second Law: Students Must Be Partners, Not Subjects
The second revelation came from Nelson’s radical decision to ask his students what they thought about AI integration before implementing it. This approach seems obvious in retrospect, yet it runs counter to the authoritarian instincts that dominate educational planning.
“That student perspective is missing from so many of our conversations,” Nelson noted. This observation leads us to the Second Law of AI in Education: An AI implementation must respect student agency and perspective, except where such respect would conflict with the First Law.
Nelson’s approach involved what we might call “pedagogical transparency.” Rather than hiding the use of AI tools or implementing them through administrative decree, he made students partners in the experimental process. He explicitly encouraged AI use while requiring reflection—a combination that transformed potential academic dishonesty into genuine learning opportunities.
The results were illuminating. Students initially worried that “using it at all was cheating,” but when given permission and structure, they engaged thoughtfully with both the possibilities and limitations of AI tools. The anxiety around cheating transformed into more productive concerns about skill development and learning effectiveness.
This partnership model addresses a fundamental asymmetry in educational technology implementation. Administrators and faculty typically have the luxury of gradual adoption and careful consideration, while students are expected to adapt immediately to whatever tools are thrust upon them. Nelson’s approach acknowledges that students are not passive recipients of educational technology but active participants who bring their own experiences and insights to the process.
The Third Law: Institutions Must Choose Courage Over Control
The most profound insight from Nelson’s experiment relates to institutional behavior under pressure. “It is forcing institutions to either double down on surveillance and control or give it up,” he observed about AI’s impact on education.
This binary choice leads us to the Third Law of AI in Education: An institution must preserve its educational mission through adaptation and trust, except where such preservation would conflict with the First or Second Laws.
The traditional institutional response to academic dishonesty has been to increase surveillance—more proctoring software, more plagiarism detection, more sophisticated monitoring systems. But this approach creates what evolutionary biologists call a “Red Queen” effect: an endless arms race in which each new security measure prompts more sophisticated circumvention attempts.
Nelson’s alternative approach requires institutional courage: the willingness to trust students while creating structures that make that trust productive. This means accepting that some students may misuse AI tools, while focusing on creating environments where most students will use them thoughtfully.
The courage required here is not unlike that demanded of early democratic institutions. Democracy requires trusting citizens to make good decisions, even knowing that they will sometimes make poor ones. The alternative—authoritarian control—may seem safer in the short term, but ultimately undermines the very goals it claims to protect.
The Practical Implementation: A Three-Step Protocol
Based on Nelson’s experience, we can derive a practical protocol for AI integration that respects all three laws:
Step One: Listen Before You Leap
Begin any AI integration by asking students about their current experiences with these tools. As Nelson discovered through coffee conversations and Zoom calls with former students, their perspectives are both more sophisticated and more practical than faculty assumptions typically allow.
Step Two: Create Safe Experimental Spaces
Implement AI tools in low-stakes environments where failure is acceptable, and learning is visible. Nelson’s peer review workshops exemplify this approach—students could experiment with AI feedback while remaining in direct conversation with human peers and instructors.
Step Three: Model Intellectual Humility
Perhaps most importantly, instructors must model the very uncertainty they hope to cultivate in students. “That’s something I try to cultivate and perform myself as a way of modeling what I want for my students,” Nelson explained about his willingness to be vulnerable and unsure about AI’s educational value.
The Deeper Pattern: Technology as Mirror
What emerges from Nelson’s experiment is a pattern that extends far beyond AI in education. Every significant technological shift forces us to confront fundamental questions about our values and methods. The printing press challenged oral traditions; the calculator challenged computational pedagogy; the internet challenged information gatekeeping.
In each case, the institutions that thrived were those that asked not “How do we prevent this change?” but rather “How do we harness this change to serve our core mission better?”
Nelson’s insight that students are “radically resourceful” points to a more profound truth: they are already living in the post-AI world we are trying to understand. Our choice is not whether to allow AI into education, because it is already there. The choice is whether to engage with it thoughtfully or pretend it doesn’t exist.
The Future History of Present Decisions
Looking ahead, we can predict that institutions will fall into three categories: those that ban AI and gradually become irrelevant, those that adopt it thoughtlessly and lose their educational mission, and those that integrate it carefully while preserving what matters most about human learning.
The institutions in this third category will be distinguished not by their technology but by their adherence to something like our Three Laws: they will use AI to enhance rather than replace learning processes, treat students as partners rather than subjects, and choose adaptation and trust over control and surveillance.
Nelson’s experiment suggests that this third path is not only possible but more engaging for both students and faculty. His students didn’t just learn to use AI tools; they learned to think critically about the role of artificial intelligence in human learning—a skill that will serve them far better than any specific technological competency.
The story of AI in education is still being written, and we are all characters in this narrative. The question is not whether we will be affected by artificial intelligence, but whether we will be thoughtful participants in shaping that impact or passive victims of changes we refuse to understand.
In the end, Nelson’s approach offers us something more valuable than a solution to the AI problem. It provides us with a way of thinking about technological change that preserves human agency while embracing human possibility. And perhaps that is the most human response of all to our artificial creations.
The future is not something that happens to us, but something we create through the choices we make today. The laws and policies we establish will determine whether our artificial partners serve human flourishing or merely human convenience.