Reassessing Test Centre Security: A Balanced Perspective
In her article, “Why Can’t Test Centers Catch Cheating Effectively?” Sophie Wodzak of the Duolingo English Test critiques traditional test centres, highlighting their vulnerabilities in preventing cheating, and proposes that Duolingo’s technology-led approach to test security surpasses them.
While it is important to identify the risks associated with in-centre exams, as with any form of exam delivery, it is equally essential to acknowledge how exam centres are responding to those risks and the safeguards they are putting in place to keep assessments secure.
The rapid development of technology-led remote exams has proven invaluable when students are unable to attend test centres, such as during the height of the Covid-19 pandemic – but this form of assessment does not come without its own risks. Unless closely managed, remote exams can suffer from operator complacency in areas such as invigilator training and technical support, resulting in a confusing and stressful experience for candidates. This is hardly acceptable even in a low-stakes assessment environment, but when it comes to high-stakes examinations, badly handled remote exams can have dire consequences for both candidates and awarding bodies.
In her article, Wodzak suggests that human invigilators in test centres may miss subtle cheating behaviours because they oversee too many candidates at once. While it is true that human oversight isn’t infallible, many test centres have developed strategies to overcome this. Beyond simply hiring more invigilators, these include rigorous training programmes, standardised monitoring protocols, and the integration of surveillance technologies to assist with real-time observation.
Technological Enhancements in Test Centres
Most modern test centres have embraced technology to bolster security. Biometric verification, secure browser environments, and AI-driven behaviour analysis are all becoming standard in these environments. A good example of this is China’s recent college entrance examinations, known locally as ‘gaokao’. Test centres in several provinces, including Jiangxi and Hubei, incorporated AI systems to monitor candidates in real time and flag any suspicious behaviour. In Hubei, this technology was combined with smart security screenings, ID verification checks, and human pat-downs at test centres.
Such tools help detect anomalies – candidates behaving unexpectedly, using unauthorised materials, or attempting to impersonate someone else – and can effectively bolster human oversight.
Addressing Proctor Misconduct
Wodzak raises concerns surrounding invigilator misconduct, citing instances where invigilators have been found complicit in cheating schemes. But these cases, whilst serious, are rare exceptions. The vast majority of test centres and exam providers (ourselves included) have strict protocols in place – such as background checks, continuous training, and audit trails – to deter and detect any misconduct by staff.
At VICTVS, our invigilators undergo rigorous, ongoing training to ensure they deliver a world-class service to every candidate, anywhere in the world. This comprehensive training equips every member of the VICTVS Global Network to uphold the highest standards of exam security and academic integrity – while also providing real, in-person support to candidates when they need it most.
The Role of AI and Post-Test Analysis
Wodzak references the Duolingo English Test’s use of AI to monitor and analyse thousands of test sessions simultaneously, detecting irregularities. She contrasts this with in-person invigilators, who are limited to spotting only what they can see in real time. However, this overlooks the fact that many traditional testing organisations now integrate AI into in-centre assessments. These systems can flag suspicious behaviours for further review, as demonstrated in China’s gaokao exams mentioned above.
Incorporating AI tools into in-person exams creates a multi-layered approach to test security, as opposed to relying on AI oversight alone – which comes with its own risks. False positives, false negatives, and a lack of contextual judgement are all pitfalls of using only AI to monitor or mark a candidate’s work. Combining AI with professional invigilators balances efficiency with judgement.
Conclusion
While the DET presents innovative methods for ensuring test integrity, it’s important to recognise that traditional test centres are not static; they continually evolve, adopting new technologies and methodologies to combat cheating. Both models have their merits and challenges, but a comprehensive approach that combines human oversight with technological tools – whether in person or online – is pivotal in maintaining the credibility of assessments.