
Teaching Students to Navigate AI-Generated Information
Feb 07, 2025

When I was a kid, the highest form of reliable knowledge was an 8-pound volume of the Encyclopedia Britannica. When I wrote a research paper on the American Revolution in middle school, my teacher took the class down to the library, where I found the A volume of the encyclopedia and wrote my entire essay based on that book's entry. There was no cross-checking of the information for accuracy, no inherent skepticism about the validity of what I read. Britannica was a trusted name, and the assumption was that anything contained in that publication was valid.
Wikipedia and the Shift in Information Trust
Fast-forward to my first year of teaching, and Wikipedia was the arch-nemesis of many social studies teachers. The encyclopedia was now open source: no longer a tightly curated publication from a team of professional researchers, but something anyone on the planet could write. Of course, much of the information on Wikipedia had footnotes and sources, but there was plenty of margin for error, and I remember constantly stressing to my students that Wikipedia should be a launchpad for research, not a valid source of information.
One more fast-forward: artificial intelligence.
The Rise of Artificial Intelligence in Research
Generative AI draws on billions of sources of information and compiles them however the researcher wants. Want to know the causes of World War II? Just ask ChatGPT or Gemini, and it will give you bullet points or write you a whole essay. Need recent data for an opinion piece on climate change? AI can find that for you. And like Wikipedia, these tools can even provide sources if you ask for them.
The only problem is, sometimes those sources are made up. You click on the provided links and they don't go anywhere. Or the data you collect through AI is not entirely accurate, was pulled from biased sources, or lacks context. Sometimes, the information AI provides is completely false.
Combating AI Hallucination with Critical Thinking
This is called AI hallucination: a phenomenon where artificial intelligence generates incorrect, misleading, or entirely fabricated information while presenting it as fact. This occurs because AI models do not "think" or verify information like humans do—they predict words and patterns based on their training data, which can sometimes lead to plausible but false responses.
Users of AI, which will sooner or later include every one of your students, need the critical thinking skills to determine whether what they find is accurate. The internet is riddled with misinformation and disinformation, and the advent of artificial intelligence will only produce more if there is not a concerted effort to teach students how to use it ethically and responsibly.
One of my favorite ways to do this is to use the CRAAP Test.
CRAAP is an acronym that stands for Currency, Relevance, Accuracy, Authority, and Purpose. It's a great way to evaluate sources and determine whether they are crap or not (middle schoolers love that one), and it works just as well for fact-checking AI-generated responses. Whether you call it CRAAP or PAARC, it is an essential skill for any student who uses the internet to gather sources and learn new ideas and information.
The CRAAP Test
C - Currency
How up-to-date is the information? If a student is writing a paper on the Israel-Palestine conflict and ChatGPT gives them a source from 2010, that’s a problem. The situation has changed drastically since then. Teaching students to check the date of their sources helps them ensure they aren’t using outdated or irrelevant information.
R - Relevance
Is the response actually answering the question? AI can sometimes generate answers that sound impressive but don’t really address what was asked. Students need to read critically and determine if the response is actually relevant to their topic or just filler.
A - Accuracy
Is the information correct? This is a big one. Students should double-check AI-generated content by looking for the same information in multiple credible sources. If they can’t find it anywhere else, that’s a red flag. They should also ask: Is this information backed by evidence? Does it align with expert knowledge on the topic?
A - Authority
Who is behind the information? When students cross-check AI’s responses with other sources, they need to consider the credibility of the authors. Do these sources come from experts in the field? Do they have relevant education, experience, or credentials? Not all sources are created equal—random blog posts don’t hold the same weight as peer-reviewed research or official reports.
P - Purpose
Why does this information exist? Is it meant to inform, teach, sell, entertain, or persuade? If a source has a clear bias or agenda, students should take its claims with a grain of salt. AI-generated content pulls from a massive range of data, and not all of it is neutral or reliable. Recognizing bias helps students make more informed decisions about what information to trust.
You can get my free printable poster for the CRAAP Test here.