
Something odd happens when you teach students about algorithmic bias. They get it. They really do. Show them how facial recognition systems fail on darker skin tones, explain the feedback loops that turn hiring algorithms into bias amplification machines, walk them through how recommendation systems can radicalize users, and they understand. They can write essays about it. They ace the tests.
Then they go right back to using the same systems they just critiqued, unchanged in their behavior, unchallenged in their assumptions about what’s possible or who gets to decide.
This isn’t a failure of comprehension. It’s a failure to recognize that comprehension without authority is just spectating. We’ve built AI literacy programs that treat ethics like a subject to study rather than a capacity to exercise, and the result is a generation fluent in critique but uncertain they have permission to act on it.
The problem has a structure. We can call its two parts ethical awareness and ethical agency, though those terms make it sound simpler than it is.
What Awareness Does (And Doesn’t Do)
Start with awareness. When we say someone has ethical awareness around AI, we mean they’ve learned to see these systems as human projects that carry human consequences. Not neutral tools. Not the inevitable forces of progress. Human projects, which means they inherit all the biases, shortcuts, and power dynamics that humans bring to any project.
A student with ethical awareness can look at an AI tutoring system and notice things. The feedback sounds weirdly generic, like it was written by someone who’s never actually talked to a struggling student. The questions it asks steer toward narrow, testable knowledge rather than genuine understanding. It treats all learning difficulties the same way, ignoring whether you’re bored, confused, or dealing with something in your life that makes focusing hard right now.
That noticing is valuable. You can’t fix what you don’t see. But here’s the thing about awareness: it’s fundamentally observational. It puts you in the position of someone watching a play, able to critique the plot and staging but not able to walk onstage and change the script. You see the problem. You might even understand why the problem exists, trace it back to training data or business incentives or the blind spots of the designers. But seeing doesn’t automatically translate to doing.
This is where much AI literacy education ends. Students learn to identify bias in datasets. They analyze case studies of algorithmic discrimination. They discuss privacy violations, data harvesting, and the environmental costs of training large language models. All of this is important. None of it is sufficient.
Because you can have a classroom full of students who could give you a sophisticated analysis of why their school’s surveillance system is invasive and counterproductive, and those same students will still carry their monitored devices everywhere, will still accept that their emails get scanned and their locations get tracked, and will still shrug when you ask them why they don’t push back. They’ll tell you they have no choice. They’ll tell you that’s just how things work now.
They have awareness. What they lack is agency.
The Architecture of Agency
Agency is different. It’s not about knowing that something is wrong. It’s about believing you have the standing to do something about it, plus the practical capacity to act on that belief.
Let’s be specific about what this looks like. Take that same AI tutoring system. A student with awareness notices the generic feedback. A student with agency does something: maybe they stop using it for actual learning and treat it as a baseline to improve on with their own thinking. Maybe they document the problems and bring them to a teacher. Maybe they look for better tools, or they advocate to their school administration that this particular system isn’t worth the money. Maybe they just refuse to engage with it at all, accepting whatever grade consequences follow because they’ve decided their learning is more important than the metrics the system tracks.
All of those responses share something. They’re based on the assumption that the student has authority in the situation. Not complete authority—schools make decisions, budgets constrain choices, district policies limit options. But some authority. Enough to say “this tool doesn’t serve my learning, so I’m going to use it differently or not at all.” Enough to believe that their assessment of the tool’s value matters, that their preferences count, that they’re not just obligated to accept whatever technology gets deployed around them.
This sense of authority doesn’t arise naturally from awareness. You can understand every technical and ethical dimension of how an AI system works and still feel completely powerless to affect it. The connection between knowing and acting isn’t automatic. It has to be built, and building it requires different kinds of experience than building awareness does.
The Pedagogy Gap
Here’s where most AI literacy programs have a serious problem. We know how to teach awareness. The tools are familiar: lectures, readings, discussions, case studies. You present information about how AI systems function and fail. You analyze real-world examples. You ask students to think critically about implications. It’s not trivial work, but it fits within conventional educational structures. A good teacher with good materials can reliably help students develop sophisticated awareness of AI’s ethical dimensions.
Agency doesn’t work that way. You can’t lecture someone into feeling empowered. You can’t assign readings that will make a student believe they have authority over technology. Case studies of others making ethical choices might be inspiring, but they’re still one step removed from the experience of making such choices yourself, under real constraints and with actual consequences.
Agency develops through practice, and practice requires situations with real stakes. Not hypothetical ethics problems. Not thought experiments about autonomous vehicles choosing who dies in accidents. Real choices in contexts that matter to the learner, where their decision affects their learning, their privacy, their relationships, and their sense of themselves.
Consider what this might look like. A class is working on a community research project about local water quality. They could use an AI tool to help analyze data from water samples. The tool is fast. It produces professional-looking visualizations. But—and here’s where it gets interesting—it requires uploading the data to a commercial platform where the company will own it and potentially use it to train future models.
Now there’s a real choice. Use the tool and get impressive results quickly, but give away data that belongs to the community and might be used in ways you can’t predict or control. Or find other methods that keep the data local but take more time and produce less polished outputs. There’s no clear right answer. There are tradeoffs that have to be negotiated with project partners, with community members whose water you’re testing, and with the realities of project deadlines.
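For the curious, here is roughly what the slower, keep-the-data-local route could look like in practice. This is a minimal sketch under invented assumptions: the file name, column names, and action-level threshold are hypothetical stand-ins for whatever a class might actually collect, not drawn from any real project.

```python
# Local analysis of community water-sample data: nothing leaves the classroom machine.
# Assumes a hypothetical CSV with columns: site, date, lead_ppb (all names illustrative).
import pandas as pd
import matplotlib.pyplot as plt

ACTION_LEVEL_PPB = 15  # example threshold only; check current local guidance

samples = pd.read_csv("water_samples.csv", parse_dates=["date"])

# Per-site summary: average lead level and how many samples exceed the threshold.
summary = samples.groupby("site")["lead_ppb"].agg(
    mean_ppb="mean",
    n_samples="count",
    n_over_limit=lambda s: int((s > ACTION_LEVEL_PPB).sum()),
)
print(summary.sort_values("mean_ppb", ascending=False))

# A plain bar chart: less polished than a commercial dashboard, but the data stays local.
summary["mean_ppb"].plot.bar(title="Mean lead level by site (ppb)")
plt.axhline(ACTION_LEVEL_PPB, color="red", linestyle="--", label="action level")
plt.legend()
plt.tight_layout()
plt.savefig("lead_by_site.png")
```

The point isn’t the code. It’s that the tradeoff is real: this route takes more effort and produces plainer output, but the community’s data never touches a platform the class doesn’t control.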
A student working through that decision is building agency. They’re practicing the judgment calls that ethical AI use requires in real contexts. They’re learning that they get to weigh efficiency against privacy, convenience against control. They’re developing the confidence to make those calls and defend them.
This is completely different from analyzing a case study where someone else made a similar choice. The case study builds awareness. The lived experience builds agency.
The Domain Problem
But there’s a complication: agency seems to be context-dependent in ways that awareness isn’t.
A student might develop strong agency around surveillance technology. They understand the privacy implications, feel confident refusing apps that harvest too much data, and advocate alternatives to their peers. Put that same student in front of an AI writing tool, and all that agency disappears. They’ll accept whatever the system generates, barely editing it, not thinking critically about whether the output reflects their actual thinking or just statistically probable next words.
Why? Same student, same general technology category (AI systems with ethical implications), but a completely different response.
Part of it is familiarity. Surveillance feels intrusive in an immediate, visceral way. You can feel yourself being watched. AI writing assistance feels helpful. It’s solving a problem you have (writing is hard, deadlines are real), and the ethical issues are more abstract. Is it really a problem if you use AI to help draft something you’ll revise anyway? Where’s the line between assistance and replacement?
Part of it is about what’s normalized. Students have been told for years to be careful about privacy, to think about who’s tracking them. That message has penetrated enough that many teenagers are more privacy-conscious than their parents. But AI writing tools are new; tech companies and even some educators are actively promoting them, and there’s no established cultural norm yet about what counts as appropriate use.
And part of it is just that different technologies raise different questions. The skills you need to evaluate a surveillance system don’t automatically transfer to evaluating a generative model. The arguments you’d make about privacy don’t map cleanly onto arguments about intellectual authenticity or the development of writing ability.
This creates a pedagogical challenge. Do we need to teach agency separately for every category of AI application? That’s not sustainable. The technology is moving too fast, and AI is being deployed across too many domains.
The alternative is to look for transferable dispositions: habits of thought and patterns of response that apply across different contexts. Things like: automatically asking who benefits from this system and who bears the costs. Defaulting to skepticism about claims that a technology is neutral or inevitable. Regularly checking whether a tool is serving your purposes or shaping them. Being willing to sacrifice convenience when it conflicts with values you care about.
These dispositions might transfer across domains better than specific technical knowledge. A student who’s learned to ask “who owns my data and what will they do with it?” when evaluating a social media app might think to ask “who owns my words and what will they learn from them?” when evaluating an AI writing assistant.
But dispositions alone aren’t enough either. You also need domain-specific knowledge. The questions to ask about facial recognition aren’t identical to those for content recommendation algorithms. The failure modes, stakeholders, and potential harms differ.
So maybe the answer is both: transferable dispositions that create a general stance of critical engagement, plus domain-specific knowledge that helps you recognize particular risks and opportunities. The disposition tells you to look for problems. The knowledge tells you what problems to look for.
Power and Its Limits
Now comes the hard part. Everything so far has treated agency as if it’s primarily about individual capacity and choice. Build the right skills, develop the right dispositions, gain enough knowledge, and you’ll be empowered to make ethical decisions about AI use.
Except power doesn’t work that way. Agency isn’t just about what you’re capable of choosing. It’s about what you’re permitted to choose, and who gets to decide what your options are.
A student might have sophisticated awareness of AI ethics and a well-developed sense of personal agency, yet still be forced to use systems they find objectionable because those systems are institutionally mandated. Your school requires you to use a particular learning management system that tracks everything you do. Your employer requires you to use productivity monitoring software. Your city deploys facial recognition in public spaces. In each case, you might understand exactly what’s wrong with the system and want desperately to refuse it, but refusal carries consequences you can’t afford.
This is the political dimension of agency that most AI literacy programs ignore. They frame ethical choice as an individual responsibility, which is accurate but incomplete. Individual choices matter, but they happen within structures that constrain what’s choosable.
A meaningful concept of ethical agency has to include collective action, policy advocacy, and institutional transformation. It’s not enough for students to learn that they can refuse AI tools in their personal work if they have no voice in what tools their school adopts. It’s not enough to develop individual critical thinking skills if the systems surrounding you don’t have mechanisms for criticism to affect decisions.
Think about what this means practically. Students need to learn not just how to evaluate AI systems but how to participate in governance structures where decisions about AI adoption are made. They need practice in collective decision-making, coalition-building, and making arguments that can persuade people in institutional power. They need to understand policy processes, budget constraints, and the difference between advocating for change and being invited to perform participation in a process where the real decisions have already been made.
This gets us into complicated territory because schools aren’t democracies. Students don’t have equal voting power with administrators on technology adoption. But there’s a range between tokenism and genuine voice, and most students currently operate close to the tokenism end. Their feedback about educational technology is rarely solicited, even more rarely taken seriously, and almost never decisive.
If we’re serious about building ethical agency, that has to change. Students need authentic opportunities to shape the technological environment in which they learn. Not just expressing preferences but making actual decisions, at least about some things. Even if it’s just within a single classroom or a single project, there need to be spaces where student judgment about AI use carries real weight.
Because here’s what happens when there aren’t such spaces: students internalize the message that their ethical concerns don’t matter, that technology decisions are someone else’s responsibility, that their job is to adapt to whatever systems get deployed. All the awareness in the world won’t overcome that learned helplessness. You end up with people who can articulate sophisticated critiques of AI but never imagine those critiques could change anything.
The Developmental Question
Does any of this map onto age or educational level? Not as neatly as we might hope.
A five-year-old who won’t talk to Alexa because “it’s listening to us” has a form of agency, even if their awareness is limited and their reasoning is simple. They’ve decided they don’t like being monitored, and they’re acting on that feeling. It’s not sophisticated. It might be based on an incomplete understanding. But it’s real agency in the sense that matters: they’ve exercised choice about whether to engage with a technology.
Meanwhile, many adults with graduate degrees and extensive knowledge of AI’s societal implications exhibit almost no agency. They use whatever tools their workplace provides, accept whatever recommendations their devices make, and never question whether the convenience is worth the cost. They have awareness. They lack the disposition, confidence, or institutional support to act on it.
So age doesn’t determine capacity for agency. But developmental patterns likely exist in both awareness and agency, and thinking about them might help design better learning experiences.
Early awareness tends to be concrete and personal. This app crashes a lot, which frustrates me. This game is designed to keep me playing even when I’m no longer having fun. This AI gives me wrong answers when I ask it math questions. These are direct experiences of technology failing or being manipulative, and they’re accessible to young learners.
More sophisticated awareness recognizes patterns and systems. Not just “this app is annoying,” but “apps are designed to capture attention for advertising revenue.” Not just “this algorithm gave a biased result” but “algorithms trained on historical data will reproduce historical biases unless explicitly corrected for.” This kind of awareness requires abstract thinking and the ability to connect particular instances to general structures.
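That last claim, that algorithms trained on historical data reproduce historical biases, can be shown concretely. The sketch below is a toy illustration under invented assumptions, not a real hiring system: the data is synthetic, the group penalty is fabricated for the example, and the point is only that an ordinary classifier will faithfully learn it.

```python
# Toy demonstration: a model trained on biased historical decisions reproduces the bias.
# All data here is synthetic; the numbers are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, size=n)   # 0 = group A, 1 = group B
skill = rng.normal(size=n)           # skill distributed identically in both groups

# "Historical" decisions: same skill, but group B was held to a higher bar.
hired = (skill > 0.0 + 0.8 * group).astype(int)

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Two equally skilled candidates who differ only in group membership.
prob_a = model.predict_proba([[0, 0.4]])[0, 1]
prob_b = model.predict_proba([[1, 0.4]])[0, 1]
print(f"predicted hire probability: group A {prob_a:.2f}, group B {prob_b:.2f}")
# The gap persists: the model has learned the historical penalty as if it were signal.
```

Nothing in the training step is told to discriminate; the model is simply fitting the record it was handed, which is precisely the pattern this level of awareness is meant to recognize.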
The most sophisticated awareness might include historical and political dimensions. Understanding how specific technologies emerged from specific economic incentives and regulatory environments. Recognizing that what counts as “bias” or “fair” is itself contested and depends on whose interests you prioritize. Seeing current AI development as part of longer patterns of technological change and resistance.
Agency might have a parallel development. Early agency is personal refusal or adoption. I won’t use this tool because I don’t like it. I will use this tool because it helps me. The reasoning might be simple, but the exercise of choice is real.
Developing agency includes justification and persuasion. Not just choosing but being able to explain the choice, defend it to others, and maybe convince others to make similar choices. This requires articulating values, weighing tradeoffs, and engaging with counterarguments.
Mature agency involves institutional and collective action. Not just personal choice, but working to change the options available to others. Participating in policy discussions. Building alternatives. Organizing collective refusals or adoptions. This is agency operating at a different scale, trying to reshape the systems themselves rather than just navigate them.
The question for educators is how to create learning experiences that help students progress through these stages. And here’s where it gets tricky: you can’t just start at the bottom and work up. A sixteen-year-old doesn’t need to begin with the simplest forms of awareness and agency. But they also might not be ready for the most sophisticated forms. The right starting point depends on the learner’s prior experience, the context they’re learning in, and what questions they’re already asking.
What Productive Failure Looks Like
One thing that’s clear: building agency requires opportunities to fail.
Students need to make choices about AI use that turn out poorly, reflect on what went wrong, and revise their decision-making framework. You can’t develop good judgment without exercising bad judgment first and learning from it.
But educational institutions are terrible at allowing this kind of failure. Schools optimize for avoiding mistakes, for getting right answers, and for following established procedures. The whole structure pushes against the kind of experimentation that building agency requires.
Imagine a student decides to refuse an AI tool that the rest of their group is using for a collaborative project. Maybe they have good ethical reasons. Maybe their reasons are confused, or they’re not thinking through all the implications. Either way, their refusal makes the group’s work harder. Other group members are annoyed. The project takes longer, and the results aren’t as polished as they would have been with the tool.
That’s a productive failure if—and this is crucial—there’s space to reflect on it afterward. What were the actual consequences of refusing the tool? Were they worth the ethical stance? What could have been done differently to maintain the ethical position while reducing the costs to the group? How did the power dynamics within the group affect who got to make the choice?
But if the student just gets a bad grade because their project wasn’t as good, and there’s no space for that reflection, then it’s just failure. They learn that ethical agency comes with penalties and probably learn not to exercise it next time.
Creating space for productive failure means building in time for reflection, de-emphasizing grades relative to learning, and making the ethical dimensions of choices visible and discussable. It means teachers being willing to value a well-reasoned decision that leads to worse immediate outcomes over unreflective choices that happen to work out.
This is hard to do within current educational structures, but it’s necessary. Agency develops through practice, and practice includes failure. No way around it.
The Refusal Problem
Let’s talk specifically about refusal, because this is where agency gets most concrete and most uncomfortable.
The “power of refusal” means having the option not to use an AI system when you judge that using it would conflict with your values or goals. That sounds straightforward until you start thinking about what it requires.
First, it requires that refusal is actually possible. If a system is mandatory—if you can’t complete your work or fulfill your obligations without using it—then there’s no real choice to refuse. You might object, you might use it minimally or subversively, but you can’t opt out.
Second, it requires that the costs of refusal are bearable. Even if refusal is technically possible, if it means failing a class or losing a job, most people can’t afford it. The choice becomes theoretical rather than practical.
Third, it requires that refusal is recognized as a legitimate choice rather than being treated as ignorance, stubbornness, or technophobia. If refusing an AI tool gets you labeled as someone who “doesn’t understand technology” or “is resisting progress,” the social costs might be as prohibitive as any formal penalty.
Given these requirements, how often is meaningful refusal actually possible? Less often than AI literacy advocates tend to acknowledge.
But there’s something important about teaching refusal even when it’s not always possible. The practice of evaluating whether you should use a tool, articulating reasons you might refuse it, imagining what refusal would look like—all of that builds the habits of thought that agency requires. Even if you end up using the tool because you have to, you’re using it differently when you’ve genuinely considered not using it. You’re more alert to its limitations, more critical of its outputs, more aware of what you’re giving up in exchange for its benefits.
And sometimes refusal really is possible. There are contexts with genuine choice, and students need practice recognizing those contexts and exercising choice within them. The student who’s learned to ask “do I actually need to use this tool?” will sometimes discover the answer is no, that the tool was optional all along but everyone else just assumed it was necessary.
So teaching refusal isn’t about creating a generation of stubborn technophobes. It’s about teaching people to recognize where they have choices, evaluate those choices carefully, and be willing to choose differently when they have good reasons.
Agency as Citizenship
There’s a final dimension worth considering. Ethical agency around AI is actually a form of citizenship, a way to participate in collective decisions about the kind of society we want to live in.
When someone refuses to use facial recognition, they’re not just making a personal choice. They’re taking a position on whether that technology should be socially acceptable, whether it should be deployed in public spaces, and what kinds of surveillance we want to normalize. Individual choices aggregate into cultural norms, and cultural norms shape what technologies succeed or fail.
This means that ethical agency has effects beyond the individual. A student who develops the habit of critically evaluating AI systems and refusing ones that conflict with their values isn’t just protecting themselves. They’re contributing to a broader conversation about what kinds of technological futures we should build toward.
But this also means agency has responsibilities beyond the individual. If your choices affect cultural norms that others have to live with, then those choices deserve to be well-informed and thoughtfully made. You have an obligation to understand what you’re refusing and why, to engage with counterarguments, and to consider effects on others.
This is what makes the pedagogy complicated. You’re not just teaching skills or dispositions. You’re teaching something close to political participation: how to make decisions that affect more than just yourself, how to balance individual autonomy with collective welfare, how to engage with people who see things differently.
And you’re doing this with students who are still figuring out their own values, their own identities, their relationship to authority and institutions. The temptation is to simplify, to present certain choices as obviously right and others as obviously wrong. But that defeats the purpose. Agency means making your own well-reasoned choices, not just adopting someone else’s approved positions.
So the teaching has to be genuinely exploratory. Present dilemmas without predetermined right answers. Create space for students to disagree with each other and with the teacher. Model the kind of reasoning that leads to sound judgment without prescribing what conclusions that reasoning should reach.
This requires both pedagogical confidence and pedagogical humility: confidence that students can handle genuine complexity and ambiguity, and humility enough to accept that your own conclusions aren’t the only valid ones.
Back to the Beginning
We started by noting that students who understand AI’s ethical implications don’t automatically act on that understanding. Now we can see why.
Understanding creates awareness. It teaches you to see the problems. But seeing problems doesn’t give you the authority to address them, the skills to address them effectively, or the institutional structures that enable you to do so.
Agency requires all of those things. It requires experience making real choices under real constraints. It requires practicing, failing, and learning from that failure. It requires contexts where student judgment matters and affects outcomes. It requires collective structures for participating in decisions that individuals can’t make on their own.
Most critically, it requires a fundamental shift in how we think about AI literacy. Not as information to transmit but as capacity to develop. Not as something you learn about but as something you learn to do.
The shift is pedagogically demanding. It’s easier to teach a unit on algorithmic bias than to create authentic situations where students practice refusing biased algorithms. It’s easier to assign readings about data privacy than to structure projects where privacy tradeoffs are real and consequential.
But the harder path is necessary if we want students who don’t just understand that AI systems can be unjust, invasive, or manipulative, but who believe they have the standing to demand better ones.
That belief—that sense of authority over technology rather than subjection to it—is what separates awareness from agency. And right now, we’re producing a lot more awareness than agency. If we want different results, we need different methods.
The good news is we know what some of those methods look like. Authentic choices with real stakes. Space for productive failure. Collective decision-making structures. Models of refusal as legitimate and thoughtful rather than ignorant or stubborn.
The harder news is that these methods require giving up some control, accepting some messiness, and tolerating outcomes that aren’t optimal in the short term because they’re building capacity that matters in the long term.
But if the alternative is students who can describe in detail how AI systems fail them while feeling powerless to do anything about it, then the messiness is worth it.
Because at the end of the day, this isn’t about technology. It’s about who gets to decide. And right now, the people building AI systems have decided that they’ll decide. Teaching students to understand that isn’t enough. We need to teach them that they, too, get a vote.