Using pre- and post-assessments to gauge students' information literacy growth

Discover how pre- and post-assessments reveal students' information literacy growth. Baselines shape targeted instruction, post-assessment results guide adjustments, and data-driven checks beat gut feelings. A practical, reader-friendly guide for educators and librarians seeking measurable progress.

Assessing information literacy in the classroom isn't about a single moment of truth. It's a steady, data-informed conversation between what students can do now and what they can grow into with thoughtful guidance. If you’re working with students who will encounter the GACE Media Specialist assessment at some point, you’ll want a clear, humane way to track that growth over time. A simple, powerful approach is implementing pre- and post-assessments to measure progress. Let me explain why this two-step rhythm matters and how to put it into action without turning the classroom into a test factory.

Baseline clarity: starting where students are

Imagine setting out on a journey without a map. Pre-assessments are that map for information literacy. They give you a snapshot of each student’s current abilities—how they search, how they judge credibility, how they organize and cite sources, and how they recognize bias. With this baseline, you don’t guess about who needs extra help; you know. You can group students for targeted guidance, fine-tune lessons, and plan tasks that actually push everyone to grow rather than repeat what they already know.

But a baseline isn’t just about “finding gaps.” It’s also about recognizing strengths. Some students might shine at recognizing authority, while others excel at tracking down diverse perspectives. The beauty of a well-structured pre-assessment is that it respects both sides of the spectrum—where students struggle and where they already show promise.

Measuring growth: post-assessments that tell a story

Post-assessments step in to answer the big question: what changed? They aren’t an isolated snapshot; they’re a cumulative record of progress. When you compare post-assessment results to the baseline, you can see shifts in how students find information, how they evaluate it, and how they use it ethically. Do they choose more credible sources? Have they learned to flag bias or misinformation more reliably? Can they articulate the reason behind a source’s trustworthiness? Those are the kinds of outcomes that matter in real classrooms, regardless of any external testing cycles.

A clean, data-driven loop

Here’s a straightforward way to think about the two assessments as part of an ongoing loop:

  • Start with clear, standards-aligned outcomes. Tie what you’re assessing to established information-literacy benchmarks (for example, the AASL standards or your state’s framework). This keeps your work aligned with professional expectations, not just a single test.

  • Design a task that captures real work. Both the pre- and post-assessments should resemble authentic academic tasks: searching for sources on a topic, evaluating them critically, summarizing findings, and citing them correctly. Use performance tasks rather than multiple-choice questions alone to reveal depth of understanding.

  • Use a consistent rubric. A rubric that scores credibility assessment, source relevance, bias detection, and citation correctness helps you translate performance into meaningful feedback. It also makes growth visible across students and across time.

  • Collect, compare, adjust. After the post-assessment, look for patterns: Are certain skills improving faster than others? Do you notice persistent gaps in digital ethics or citation practices? Let that data guide your next unit, your library lessons, and your collaboration with teachers across subjects.
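If your rubric scores end up in a spreadsheet, a short script can surface those patterns for you. Here’s a minimal sketch in Python, assuming a CSV export with one row per student and paired pre/post columns for each criterion; the file name and column labels below are placeholders, not a prescribed format:

```python
import csv
from statistics import mean

# Rubric criteria scored 1-4 on both assessments; these names are
# placeholders for whatever your own rubric uses.
CRITERIA = ["credibility", "relevance", "bias_detection", "citation"]

def load_scores(path):
    """Read one row per student from a CSV exported from your form tool."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def growth_by_criterion(rows):
    """Average post-minus-pre change for each rubric criterion."""
    return {
        c: mean(float(r[f"{c}_post"]) - float(r[f"{c}_pre"]) for r in rows)
        for c in CRITERIA
    }

rows = load_scores("info_literacy_scores.csv")  # placeholder file name
for criterion, delta in sorted(growth_by_criterion(rows).items(), key=lambda kv: kv[1]):
    print(f"{criterion:15s} average growth: {delta:+.2f}")
```

Sorting the output from smallest to largest growth puts the stubborn skills at the top of the printout, which is exactly where your next unit plan should start.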

What to measure: the heart of information literacy

To keep things practical, frame the assessment around a few core capabilities:

  • Finding information efficiently. Are students using effective search strategies? Can they adapt keywords and use filters to narrow results without losing important perspectives?

  • Evaluating sources. Do students assess authority, accuracy, bias, currency, and relevance? Can they justify why a source is credible in the context of a given topic?

  • Synthesizing and paraphrasing responsibly. Do students pull together multiple sources to form a coherent understanding? Do they paraphrase accurately and avoid plagiarism?

  • Citing and ethical use. Can students cite sources properly and explain why ethical use matters in research and media literacy?

  • Communicating findings. Do students clearly explain how they decided which sources mattered and how their conclusions were supported?

Designing pre- and post-assessments: a practical blueprint

Here’s a simple recipe you can adapt without overhauling your whole schedule:

  • Choose a topic with real-world relevance. It could be a current event, a local issue, or a library-related topic that ties into media creation or information literacy.

  • Create a paired task set. For the pre-assessment, include a small search task, a brief source evaluation, and a short reflection on how they would use at least two sources. For the post-assessment, give a more involved scenario: a mini-research report that requires finding sources, evaluating them, and presenting findings with citations.

  • Use a rubric that’s easy to apply. A 4-point scale for each criterion (e.g., 4 = exceptional, 3 = solid, 2 = developing, 1 = emerging) helps you see growth clearly. Keep the descriptors concrete: What does strong source evaluation look like? What counts as a credible citation? (One way to pin the descriptors down is sketched after this list.)

  • Incorporate a reflective piece. A short prompt asks students to explain any changes in their approach: what they found harder, what they learned, and how they would approach similar tasks next time. This metacognitive piece is often where you see real growth.

  • Choose practical delivery methods. Use familiar tools: Google Forms or Microsoft Forms for the assessment itself; a shared Google Doc or a learning management system (LMS) page for the post-task write-up; and a simple rubric embedded in the form or attached as a separate PDF. The goal is smooth, predictable workflows.
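One way to keep rubric descriptors concrete is to write the scale down as plain data before you build the form. Here’s a minimal sketch; the descriptors are invented examples to adapt, not a published standard:

```python
# A 4-point rubric kept as plain data; the descriptors below are
# invented examples to adapt, not a published standard.
RUBRIC = {
    "source_evaluation": {
        4: "Justifies credibility using authority, accuracy, currency, and relevance",
        3: "Identifies credible sources with partial justification",
        2: "Relies on surface cues (layout, familiarity) to judge sources",
        1: "Accepts sources without evaluation",
    },
    "citation": {
        4: "Cites every source correctly and explains why ethical use matters",
        3: "Cites most sources correctly with minor format slips",
        2: "Cites inconsistently or with recurring format errors",
        1: "Rarely attributes sources",
    },
}

def describe(criterion: str, score: int) -> str:
    """Turn a numeric rubric score into feedback language for a student."""
    return f"{criterion} ({score}/4): {RUBRIC[criterion][score]}"

print(describe("source_evaluation", 3))
# -> source_evaluation (3/4): Identifies credible sources with partial justification
```

Keeping the rubric as data has a side benefit: the same descriptors can be pasted into a form, a PDF, or an LMS page, so students see identical criteria everywhere.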

Bringing tools into the mix

Technology is a helpful ally here, not a gimmick. The right tools support clarity and feedback without turning the process into a tech hurdle.

  • Forms for efficiency. Pre- and post-assessments in forms let you collect data quickly and annotate responses as you go. You can set up auto-grading for certain items and keep everything organized in one place.

  • Rubrics that travel. A well-structured rubric travels across subjects. Share it with students at the start so they know exactly what good work looks like. When students see the criteria, their efforts become more intentional.

  • Annotated bibliographies as evidence. For many information-literacy tasks, an annotated bibliography provides a tangible record of how students evaluate, select, and use sources. It’s a neat bridge between the cognitive work and the final product.

  • The library as a partner. School libraries aren’t just storage for books. They’re hubs for research skills, digital citizenship, and ethical use of information. Tap librarians for rubric calibration, example tasks, and feedback loops. A quick co-planned lesson can reinforce what the assessments reveal.
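As a small example of how assessment data can feed that partnership, here’s a sketch that turns post-assessment scores into a reteaching shortlist a librarian and teacher could co-plan around. The records, criteria names, and threshold are all invented for illustration:

```python
# Minimal sketch: flag students whose post-assessment score is still at
# "developing" (2) or below in any criterion. All records here are
# invented sample data; swap in your own export.
CRITERIA = ["credibility", "citation"]

records = [
    {"student": "A", "credibility_post": 3, "citation_post": 2},
    {"student": "B", "credibility_post": 4, "citation_post": 4},
]

def reteach_shortlist(rows, threshold=2):
    """Return {student: [criteria still at or below the threshold]}."""
    flagged = {}
    for row in rows:
        gaps = [c for c in CRITERIA if row[f"{c}_post"] <= threshold]
        if gaps:
            flagged[row["student"]] = gaps
    return flagged

print(reteach_shortlist(records))  # {'A': ['citation']}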

When to rely on subjective feedback, and when to skip it

You’ll hear suggestions to lean solely on teacher impressions, or to fill the day with informal observations. Here’s the important distinction: while your instincts and narrative notes are valuable, they aren’t a stand-alone measure of progress. Subjective feedback without a clear, structured framework can skew toward isolated moments rather than a trajectory of growth.

Similarly, informal observations have a place. They’re excellent for spotting trends during a unit and catching moments that spark a quick reteaching pivot. But they’re most powerful when paired with consistent, objective measures—the pre- and post-assessments are the anchors here.

On the flip side, ignoring assessments altogether misses a golden opportunity to refine instruction and demonstrate student learning. Data, when used thoughtfully, isn’t a punitive tool. It’s a compass that helps you tailor lessons, justify decisions, and celebrate gains.

A few tips to keep it human and useful

  • Keep the tasks authentic. Students should feel they’re doing real work, not ticking off a checkbox. Real-world sources, current topics, and credible outlets make the tasks relevant and engaging.

  • Balance challenge with support. Use tiered tasks or scaffolds so students at different levels can show growth without frustration. Short, focused prompts help with clarity.

  • Connect growth to classroom life. Share anonymized, aggregated results with your students and show how the curriculum adapts to their needs. When learners see the loop from task to feedback to improvement, motivation follows.

  • Be transparent about what literacy looks like in practice. Help students see that information literacy isn’t about “getting it right” once; it’s about a disciplined approach to seeking, judging, and using information responsibly across contexts.

Turning assessment into a habit, not a moment

The ultimate aim isn’t a single score or a single unit. It’s cultivating a habit of thoughtful information use that travels beyond the classroom. The two-step assessment rhythm—start with a baseline, end with a growth-focused post—helps you quantify what matters without sacrificing the humanity of learning. It also mirrors the real work media specialists do: guiding others to be smarter about how they search for, evaluate, and share information.

If you’re guiding students who will engage with the GACE Media Specialist assessment, this approach has practical payoff. It gives you the evidence you need to tailor instruction, it respects students as learners with individual pathways, and it creates a transparent, ongoing dialogue about information literacy. You’ll find that the data isn’t just a score on a page; it’s a story of skill development—one that you and your students write together, step by step.

A closing thought

Information literacy isn’t a box to check; it’s a cultural muscle we cultivate in classrooms, libraries, and media spaces. Pre- and post-assessments aren’t about trapping students in a test cycle. They’re about tracking growth in the kind of thinking that matters when a person faces a flood of information: how to search wisely, how to judge credibility, and how to use evidence with integrity. When you bring that clarity to your planning, you’ll see not just students who know more, but learners who are more discerning, more reflective, and more capable of navigating the information-rich world we all share.

If you’re comfortable sharing a quick plan, I’d love to hear how you’re planning to pair a pre- and post-assessment in your next unit. What topic will you use? What would a strong post-response look like in your classroom? The conversation around information literacy can be as dynamic as the information landscape itself—and that’s exactly where it belongs.
