Rethinking AI Literacy: From technical skills to critical engagement


Over the past decade, coding and digital literacy were widely promoted as essential skills for thriving in a technology-driven economy. But the rapid adoption of artificial intelligence (AI), particularly generative AI, is changing that narrative. In a world where tools can now generate content, write code, and make decisions, is knowing how to use digital tools enough?

AI systems are no longer passive instruments. They produce content, offer recommendations, and influence decisions and behaviours in ways that are often opaque to users. Their outputs are shaped by patterns in data that are largely invisible, and their reasoning processes are not always transparent or easy to interpret.

As AI permeates and shapes not only how we work but how we live, the capacity to engage critically with these systems is essential, not only to use them effectively but to preserve human agency. Technical skills alone are no longer sufficient, and AI literacy must go beyond simply learning how to use the tools.

Generative AI is a game changer

Generative AI is different from a straightforward tool like a calculator. It can generate content that feels like it has been written by a person. It drafts answers or essays with confidence and fluency, often mimicking human reasoning in its tone and structure. 

These characteristics have real effects on how people learn and solve problems. Early research indicates that students who rely on ChatGPT for schoolwork may experience a decline in critical thinking and a weakening of independent problem-solving skills; over time, this can erode their ability to think through challenging questions without assistance.

On top of this, large language models (LLMs) like ChatGPT are now treated as sources of information, despite their known tendency to generate inaccurate or fabricated content, the so-called “hallucinations”.

As users place more trust in these systems, they tend to apply less scrutiny to the outputs: users who rely heavily on AI are less likely to verify results, even when errors are apparent. This has implications beyond education: when drafting legal language, reviewing medical information, or summarizing news, unchecked reliance on AI tools can lead to errors with real consequences. 

We need to reconsider what digital literacy entails in the age of generative AI and teach all users to treat AI systems with care: verifying important details and understanding the technology’s capabilities, limits, errors and biases, so that its productivity benefits can be embraced safely.

Generative AI systems are not just instruments anymore. They are active producers of knowledge and content. To navigate this new reality, we should expand our concept of literacy.

A new definition of literacy

AI literacy is not merely a technical or academic issue; it is a civic one. It requires us to rethink who is considered “literate” in a world mediated by AI, and what kinds of knowledge we prioritize as a society. As generative AI continues to reshape communication, decision-making, and knowledge production, the ability to understand and interrogate these systems is no longer optional.

Educators should place AI at the centre of inquiry, using it as a lens through which to cultivate critical thinking and expand students’ understanding of how such systems influence the world. This more ambitious approach treats AI not merely as a tool, but as a sociotechnical system that demands scrutiny, reflection, and imaginative rethinking of what education can be.

This redefinition gives AI literacy a profound new purpose. In a world increasingly shaped by technologies that mimic human behaviour, understanding AI is becoming inseparable from understanding the world itself. AI literacy, then, is not just about technical skills: it is about preparing people to be engaged, critical, and responsible citizens in the age of AI.

We must go beyond simply equipping learners to adapt to automation. The future of AI doesn’t need to be one where machines mimic humans: safer, more useful paths may lie in building systems that complement rather than imitate us. The question now is: What kind of future do we want, and how can education help build it?


Do you want to learn more about AI? Check out Mila’s AI Professional Development program: