In today’s classrooms, a quiet shift is taking place: students are increasingly turning to AI tools and social media platforms like YouTube and TikTok as their primary sources of learning. As a result, the traditional authority of teachers is being eroded, with educators sometimes relegated to the background as technology takes the lead.
While this shift may seem subtle, a recent report from Anthropic offers hard data to illustrate its depth. Drawing from millions of interactions between American college students and Claude, the report paints a troubling picture: more than half of those interactions involve high-level learning goals like creation and analysis. Instead of engaging in critical thinking themselves, many students may be outsourcing this uniquely human skill to artificial intelligence.
What else is AI changing in the classroom? Here are four key insights from the report.
Anthropic’s report uses Bloom’s Taxonomy—a widely adopted educational framework—to classify learning into six levels: Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating. The final three levels involve more complex cognitive tasks, such as analyzing a literary text, evaluating its arguments or style, and creating an original response or essay.
Traditionally, these higher-order tasks were considered essential for students to tackle on their own, with guidance from human teachers. But the report reveals that AI is increasingly stepping into this role, highlighting both the advantages and potential downsides of integrating AI into education.
According to the data, students most frequently use Claude for Creating (39.8%) and Analyzing (30.2%), both higher-order tiers of cognitive engagement. In contrast, lower-level tasks such as Applying (10.9%), Understanding (10.0%), and Remembering (1.8%) are far less common.
This suggests a significant shift: students aren’t merely using AI as a study aid—they’re outsourcing some of the most intellectually demanding aspects of learning. The relationship between student and AI has moved from assistance to delegation, with many now relying on AI to think in their place, not just alongside them.
According to Bloom’s Taxonomy, Claude is most often used for higher-order tasks like creation and analysis, while lower-order skills such as remembering, understanding, and applying are much less common—raising concerns that students may be outsourcing critical thinking to AI. (Source: Anthropic)
The tendency to "let AI think for them" is further supported by statistics in the report, which show that nearly half (47%) of student-AI interactions are categorized as direct interactions.
Of the four usage types the report outlines, direct interactions involve minimal dialogue: students ask AI to solve math problems, generate summaries, or help with writing, focusing purely on the end result with little interest in the process behind it.
Anthropic flags this type of interaction as particularly concerning in the context of academic integrity. It includes queries like “solve these multiple-choice questions,” “give me exam answers,” or “rewrite my business report to avoid plagiarism detection.”
While not all usage qualifies as cheating, this outcome-driven approach is undeniably reshaping educational norms—and further blurring the line between assistance and academic dishonesty.
The chart illustrates four modes of student interaction with Claude. The vertical axis shows interaction style (from direct to collaborative), while the horizontal axis reflects the type of outcome (from problem-solving to content creation). Examples range from solving and explaining calculus problems (top left) to providing feedback and revisions on writing assignments (bottom right). (Source: Anthropic)
It’s worth asking: if students’ assignments are no longer the result of their own thinking, will future classrooms also face a moment when “my lesson plan isn’t my lesson plan”?
The report shows that AI use isn’t limited to solving problems or writing papers; it now extends into teacher preparation. Education majors are the most likely to use Claude for output creation: such requests account for 74.4% of interactions in the education field, far outpacing any other discipline.
Specifically, students studying to become teachers are using AI to generate full lesson plans, design classroom activities, and draft instructional materials. In other words, the very people training to be human educators are already outsourcing the act of teaching to machines.
This isn’t entirely negative; AI could help reduce teacher workload and alleviate stress. After all, the most exhausting parts of education often lie outside actual teaching. But when both students and teachers begin to rely on AI for the cognitive heavy lifting, one can’t help but wonder: what becomes of the human value that education is meant to cultivate?
Distribution of interaction styles with Claude across academic disciplines. Natural sciences and mathematics lean toward problem-solving, while computer science shows the highest rate of collaborative problem-solving. Education majors most often use Claude for collaborative content creation (40.6%), followed by direct output creation (33.8%). (Source: Anthropic)
Another factor potentially accelerating AI-driven learning is the uneven adoption of AI across academic disciplines. The report highlights a striking imbalance: computer science accounts for only 5.4% of U.S. college degrees, yet it represents 38.6% of all Claude usage, roughly seven times its share, likely owing to Claude’s strong programming capabilities.
Other STEM fields—such as natural sciences, mathematics, and engineering—also show disproportionately high usage, whereas disciplines like business, the humanities, and health professions lag far behind in relation to their student populations. This suggests that the benefits of AI-enhanced learning are not evenly distributed. Students who leverage AI can complete assignments more quickly and efficiently, while those without access—or who choose not to use it—may unintentionally fall behind, deepening the divide between fields.
This trend also raises broader concerns. As AI systems are optimized for structured, modular tasks, fields like the humanities—which rely on dialectical reasoning, ethics, and interpretation—may be perceived as “inefficient” or even “irrelevant.” Could this dynamic further entrench a value hierarchy in education, pushing the humanities into a prolonged deep freeze?
It might not be time to despair just yet.
Instead, the humanities may take on a new mission: becoming the ethical gatekeepers of the AI era. Teachers could shift from being mere transmitters of knowledge to facilitators of dialogue and moral reflection. Philosophy might help us interrogate the boundaries between humans and AI. Literature can still offer emotional depth, ambiguity, and lived experience: qualities no algorithm can fully reproduce.
This perspective aligns with the Microsoft 2023 Future of Work Report, which notes that while AI can assist with creative tasks, humans still need to bring the creativity—and maintain control. In other words, those who excel in creativity and critical thinking won’t be replaced by AI; they’ll be empowered by it. And the very spirit of the humanities is rooted in that critical thought.
Perhaps the question now is not whether AI will replace teachers, but what kind of teachers—and what kind of thinkers—we need in an AI-driven world.