Pre-use Assessment Review and the 2025 Standards for RTOs
We are all becoming more familiar with the nuances of the 2025 Standards for RTOs, and there has been much discussion about the requirements variously described as “pre-assessment validation” of assessment tools, “pre-use review” of assessment tools, and other terms used interchangeably for a process that Registered Training Organisations (RTOs) need to implement. Regardless of the semantics and the term we use to describe this process, the intent is clear in Standard 1.3: The assessment system is fit-for-purpose and consistent with the training product.
Delving further, Performance Indicator (b) for Standard 1.3 requires that assessment tools be reviewed prior to use, in a process that reflects what we have traditionally referred to as validation of assessment tools, while Indicator (c) requires that improvements be made to the tools based on that review.
“An NVR registered training organisation demonstrates:
a. the assessment is consistent with the requirements of the training product;
b. assessment tools are reviewed prior to use to ensure assessment can be conducted in a way that is consistent with the principles of assessment and rules of evidence set out under Standard 1.4; and
c. the outcomes of any such reviews inform any necessary changes to assessment tools.”
The Practice Guide provided by the Australian Skills Quality Authority (ASQA) highlights several ways RTOs can approach these performance indicators, including consulting with industry, moderating with trainers and assessors, and trialling the tool. As previously mentioned, we can also take the approach of “validating” the assessment tool prior to use; however, this differs from the validation we would undertake after the tool has been administered and used with learners, because different information will be available at that point (such as real learner data from completed assessments).
Our pre-use “review” or “validation” does not need to satisfy the requirements of validation under Standard 1.5 (which specifies validator credentials, timing and schedules). This article focuses on the pre-use review rather than on Standard 1.5 validation, and that distinction is important to keep in mind as we explore how to approach the review.
Methods for Conducting a Pre-Use Review
As RTOs seek efficient ways to meet the requirements of Standard 1.3 (and subsequently Standard 1.4), Generative Artificial Intelligence (AI) has emerged as a promising solution for quality assurance. While AI offers undeniable benefits in speed and efficiency, a total reliance on algorithms for assessment review presents significant risks to compliance, data security, and the quality of learner outcomes.
The 2025 Standards set a high bar for assessment quality. The pre-use review required by Standard 1.3 is not a simple tick-box exercise (we’ve all seen the templates over the years where the middle column is ticked and there are no comments down the right-hand side!). Standard 1.4 further specifies that assessment must adhere to the principles of fairness, flexibility, validity, and reliability, with judgments based on evidence that is valid, sufficient, authentic, and current. None of this is new – the principles of assessment and rules of evidence have been around for quite some time, although the definitions of these have been tweaked, and we therefore need to be adjusting what we are looking for as part of this review.
AI-powered tools can rapidly analyse assessment documents against compliance checkpoints, identify mapping gaps, and generate reports, often at an attractive price point. This can be a powerful solution for RTOs balancing tight budgets and deadlines. However, the dangers of relying solely on AI are profound and can undermine the very quality the Standards seek to uphold.
Lack of Contextual Insight: An algorithm cannot understand the specific nuances of your learner cohort, delivery mode, or unique industry context. It may approve an assessment tool that is technically mapped but practically unsuitable for your students. True quality assurance requires strategic and contextual insight that automated systems simply cannot provide. A human providing oversight, however, can add this to the process and make decisions about where something is or isn’t appropriate; the heavy lifting done by an AI tool is beneficial when its output is then presented to a VET expert for review.
Intellectual Property and Data Security Risks: A significant concern for RTOs is the protection of their intellectual property. Using public-facing or free AI models can mean your proprietary assessment tools are uploaded and potentially used to train external models, creating a major security breach. The business model behind many Large Language Models (LLMs), such as ChatGPT, often involves using submitted data to train future versions of the AI, effectively absorbing your proprietary content into their system. Even when using premium, paid AI platforms, robust security protocols are necessary to mitigate data risks. For organisations with strict data sovereignty policies, uploading IP to any external platform is a non-starter. Many government organisations do not allow their intellectual property to be uploaded to AI platforms (or even to some external servers for storage), and staff must ensure they are across their employer’s data policy before engaging with any AI tools.
The "Compliance Checklist" Illusion: AI is excellent at pattern matching but struggles with qualitative judgment. It can tell you if a performance criterion is mentioned, but it can't tell you if the assessment task is a valid or authentic measure of that skill in a real-world setting. This can lead to a false sense of security, where an RTO has a "compliant" checklist but an ineffective assessment system that fails to produce valid judgments or improve learner outcomes. Additionally, the definitions of the principles of assessment and rules of evidence have been slightly altered. If we were to rely on AI tools, they would likely pick up historic definitions online and experience bias in the way our assessments are analysed, rather than staying true to the current principles and rules.
The Human-Expert Advantage
This is where the role of an experienced VET professional becomes indispensable. A TAE-qualified specialist with current industry experience brings a level of analysis that AI cannot replicate.
A human expert may be more suitable in some situations:
For complex assessments or RTOs prioritising data security, a manual review ensures your intellectual property is never uploaded to an external AI platform.
An experienced VET expert reviews tools in the context of your specific operations, providing strategic insights that strengthen your entire assessment system rather than simply ticking a box.
Professional judgment is applied to determine whether an assessment is truly valid and reliable, ensuring outcomes are comparable regardless of the assessor. This aligns directly with the core principles of assessment demanded by the Standards.
Unlike an algorithm, a human reviewer can be asked questions and can clarify recommendations, fostering a true partnership in quality assurance. You can ask questions of an AI chatbot, but in many instances it cannot reliably converse with you; when challenged, it will simply state “You are absolutely right, I will change that immediately” rather than being able to justify the correctness of its information.
Instead of viewing this as a choice between AI and human expertise, the most robust and efficient strategy is a hybrid one that leverages the strengths of both. This approach uses AI for what it does best: the heavy lifting. An AI tool can rapidly perform initial mapping, scan for consistency, and flag potential compliance gaps, saving hours of manual work. This frees up VET professionals to focus on the high-value analysis that only a human can provide. With the groundwork laid by the AI, the expert can apply their professional judgment to assess the contextual appropriateness for your specific learner cohort, evaluate the authenticity of practical tasks, and ensure the tools truly meet the updated definitions for the principles of assessment and rules of evidence. This human verification step is critical, acting as a safeguard against AI "hallucinations" and “confabulation” (where the model might confidently generate plausible but incorrect information) and ensuring the final output is genuinely effective in practice.
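For RTOs with technically minded staff who want a concrete sense of what this “heavy lifting” step can involve, the sketch below is a deliberately simplified, purely illustrative Python example, not any vendor’s product and far less capable than a genuine AI tool. It performs a naive first-pass check of which performance criteria appear to be addressed by assessment tasks and flags the rest for priority human review; the unit criteria, task wording and function names are invented for illustration only.

from dataclasses import dataclass

@dataclass
class MappingFlag:
    """One row of the first-pass report handed to the human reviewer."""
    criterion_id: str
    criterion_text: str
    status: str               # "possible match" or "no match found"
    matched_task: str | None  # task with the strongest overlap, if any

def first_pass_mapping(criteria: dict[str, str], tasks: dict[str, str]) -> list[MappingFlag]:
    """Naive keyword-overlap check, standing in for the automated pass.

    A real AI tool would be far more capable, but its output should be
    treated the same way: as flags for a human reviewer, never as a
    final compliance judgment.
    """
    flags = []
    for cid, ctext in criteria.items():
        keywords = {w.lower().strip(".,") for w in ctext.split() if len(w) > 4}
        best_task, best_overlap = None, 0
        for tname, ttext in tasks.items():
            overlap = len(keywords & {w.lower().strip(".,") for w in ttext.split()})
            if overlap > best_overlap:
                best_task, best_overlap = tname, overlap
        if best_overlap >= 2:
            flags.append(MappingFlag(cid, ctext, "possible match - confirm in review", best_task))
        else:
            flags.append(MappingFlag(cid, ctext, "no match found - priority for human review", None))
    return flags

if __name__ == "__main__":
    # Hypothetical performance criteria and assessment tasks.
    criteria = {
        "PC1.1": "Identify workplace hazards and report them according to organisational procedures",
        "PC1.2": "Select and use personal protective equipment appropriate to the task",
    }
    tasks = {
        "Task A (written)": "Explain how you would report a hazard using your organisation's procedures.",
        "Task B (practical)": "Demonstrate safe manual handling while moving stock in the warehouse.",
    }
    for flag in first_pass_mapping(criteria, tasks):
        extra = f" ({flag.matched_task})" if flag.matched_task else ""
        print(f"{flag.criterion_id}: {flag.status}{extra}")

The point of the sketch is the workflow rather than the matching logic: whatever the automated pass produces, every flag still lands with a VET professional who makes the final judgment about validity and contextual fit.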
As you prepare for 2026, consider your approach to quality assurance and reliance on AI tools for critical compliance-related activities. While technology can be a powerful ally, it is not a replacement for human expertise. To build a truly robust assessment system that withstands regulatory scrutiny and delivers real value to your learners, the nuanced judgment of a VET professional is a necessity.
What Next?
If you’d like our help conducting your Pre-use Assessment Review or with your validation processes, get in touch! We offer two services: a premium, 100% human-based approach and a more cost-effective, AI-supported approach, leaving your RTO to decide how your data is handled by our team.
Disclaimer: AI was consulted in the creation of this article