
Assessment is where everything gets tested: not just learners, but the qualification or programme itself. A poorly designed assessment can undermine an otherwise strong programme. It can exclude capable people, reward the wrong things, and create a gap between what a credential claims and what a holder can actually do. Good assessment design closes that gap.
We work at every level of that challenge. That includes the architecture of full qualifications and programmes, but it also includes the components within them: individual units, assessment tasks, question design, marking criteria, evidence requirements, and the judgement frameworks that determine whether a learner has met the standard. The two levels are connected: a well-structured qualification can still fail if the individual assessments within it are poorly constructed, and we understand how decisions at one level ripple through to the other.
We design qualifications and assessments that are rigorous, fit-for-purpose, and genuinely accessible. Accessible doesn't mean easy; it means that the barriers in an assessment reflect the actual demands of the role or discipline, not the quirks of a particular format or the assumptions baked in during design. We build that thinking in from the start rather than revisiting it after the fact.
Our experience spans Level 2 to Level 5 assessments and multiple apprenticeship models in the UK, national apprenticeships and qualification alignment with corporate training programmes in New Zealand, and units of competency in Australia. Because our expertise lies in methodology and structure rather than subject matter, we're genuinely subject-agnostic: we've worked across more than twenty subject areas without that range affecting the quality of what we produce.
Automated assessment has been part of our work for over a decade. That long-standing focus on how automated tools should be structured and quality assured has fed directly into more recent work on AI-assisted assessment writing. This includes research with Epic Learning, published through ConCOVE, which examined what good automated assessment looks like in a New Zealand VET context and involved developing training, policies, and a quality assurance strategy for AI-generated assessments.
The common thread across all of this work: every design decision should serve the learner's ability to demonstrate genuine competence, and the employer's ability to trust what a credential represents.
To discuss qualifications or assessment design, contact us at stuart@georgeangusconsulting.com
