Mount Saint Vincent University

09/10/2025 | Press release | Distributed by Public on 09/10/2025 07:31

Social media is teaching children how to use AI. How can teachers keep up?

Artificial intelligence (AI) is reshaping how students write essays, practise languages and complete assignments. Teachers are also experimenting with AI for lesson planning, grading and feedback. The pace is so fast that schools, universities and policymakers are struggling to keep up.

What often gets overlooked in this rush is a basic question: how are students and teachers actually learning to use AI?

Right now, most of this learning happens informally. Students trade advice on TikTok or Discord, or even ask ChatGPT for instructions. Teachers swap tips in staff rooms or glean information from LinkedIn discussions.

These networks spread knowledge quickly but unevenly, and they rarely encourage reflection on deeper issues such as bias, surveillance or equity. That is where formal teacher education could make a difference.

Beyond curiosity

Research shows that educators are under-prepared for AI. A recent study found many lack skills to assess the reliability and ethics of AI tools. Professional development often stops at technical training and neglects wider implications. Meanwhile, uncritical use of AI risks amplifying bias and inequity.

In response, I designed a professional development module within a graduate-level course at Mount Saint Vincent University. Teacher candidates engaged in:

  • Hands-on exploration of AI for feedback and plagiarism detection;
  • Collaborative design of assessments that integrated AI tools;
  • Case analysis of ethical dilemmas in multilingual classrooms.

The goal was not simply to learn how to use AI, but to move from casual experimentation to critical engagement.

Critical thinking for future teachers

During the sessions, patterns quickly emerged. Teacher candidates were enthusiastic about AI to begin with, and remained so. Participants reported a stronger ability to evaluate tools, recognize bias and apply AI thoughtfully.

I also noticed that the language around AI shifted. Initially, teacher candidates were unsure about where to start, but by the end of the sessions, they were confidently using terms like "algorithmic bias" and "informed consent."

Teacher candidates increasingly framed AI literacy as professional judgment, connected to pedagogy, cultural responsiveness and their own teacher identity. They saw literacy not only as understanding algorithms but also as making ethical classroom decisions.

The pilot suggests enthusiasm is not the missing ingredient. Structured education gave teacher candidates the tools and vocabulary to think critically about AI.

Inconsistent approaches

These classroom findings mirror broader institutional challenges. Universities worldwide have adopted fragmented policies: some ban AI, others cautiously endorse it and many remain vague. This inconsistency leads to confusion and mistrust.

My colleague Emily Ballantyne and I examined how AI policy frameworks can be adapted for Canadian higher education. Faculty recognized AI's potential but voiced concerns about equity, academic integrity and workload.

We proposed a model that introduced a "relational and affective" dimension, emphasizing that AI affects trust and the dynamics of teaching relationships, not only efficiency. In practice, AI not only changes how assignments are completed; it also reshapes how students and instructors relate to one another in class and beyond, and how educators perceive their own professional roles.

When institutions avoid setting clear policies, individual instructors are left to act as ad hoc ethicists without institutional backing.

Embedding AI literacy

Clear policies alone are not enough. For AI to genuinely support teaching and learning, institutions must also invest in building the knowledge and habits that sustain critical use. Policy frameworks provide direction, but their value depends on how they shape daily practice in classrooms.

  1. Teacher education must lead on AI literacy. If AI reshapes reading, writing and assessment, it cannot remain an optional workshop. Programs must integrate AI literacy into curricula and outcomes.
  2. Policies must be clear and practical. Teacher candidates repeatedly asked: "What does the university expect?" Institutions should distinguish between misuse (ghostwriting) and valid uses (feedback support), as recent research recommends.
  3. Learning communities matter. AI knowledge is not mastered once and forgotten; it evolves as tools and norms change. Faculty circles, curated repositories and interdisciplinary hubs can help teachers share strategies and debate ethical dilemmas.
  4. Equity must be central. AI tools embed biases from their training data and often disadvantage multilingual learners. Institutions should conduct equity audits and align adoption with accessibility standards.

Supporting students and teachers

Public debates about AI in classrooms often swing between two extremes: excitement about innovation or fear of cheating. Neither captures the complexity of how students and teachers are actually learning AI.

Informal learning networks are powerful but incomplete. They spread quick tips, but rarely cultivate ethical reasoning. Formal teacher education can step in to guide, deepen and equalize these skills.

When teachers gain structured opportunities to explore AI, they shift from passive adopters to active shapers of technology. This shift matters because it ensures educators are not merely responding to technological change, but actively directing how AI is used to support equity, pedagogy and student learning.

That is the kind of agency education systems must nurture if AI is to serve, rather than undermine, learning.

Johanathan Woodworth, Assistant Professor, Education, Mount Saint Vincent University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
