Lunch and Learn - Session 8: Test if your AI is ready for real work

Find out how the Australian Prudential Regulation Authority (APRA) is evaluating generative AI tools for operational use. This session explores APRA’s structured testing framework, developed in collaboration with ASIC and RBA, to assess model performance, governance, and traceability. It’s a landmark case study in responsible AI adoption in a high-stakes regulatory environment, showing how agencies can balance innovation with accountability.
Presenter

Adrian Waddy
Adrian is currently Head of the Data Science Hub at APRA, a role encompassing responsibility for the Data Science Lab, Analytical Enablement and Suptech Exploration functions within the organisation. Prior to this, he was the Technical Lead on the Bank of England's $25 million implementation of its first big data platform, before moving to the PRA to build up its Data Science and Advanced Analytics function.
Adrian has presented widely to the international central bank and regulation community on Big Data, Data Science and AI. He also wrote the chapter "Implementing Big Data solutions" for the ECB-curated book "Data Science in Economics and Finance for Decision Makers". He is passionate about enabling talented people to have a positive effect on the communities they serve through Data Science.
Facilitator
Daniel Parris
Daniel Parris is an Engagement Manager with GovAI, a whole-of-government program in the Department of Finance working to enable responsible and effective AI adoption across the Australian Public Service (APS). In his role, Daniel works with agencies to accelerate AI adoption through information sharing, capability building, and supporting cross-agency connections. He also oversees the APS AI Use Case Library, a central GovAI resource showcasing real-world AI applications in Government, promoting reuse, and fostering innovation across the public sector.
Before joining GovAI, Daniel worked across governance, sustainability, and policy settings. This included work on APS climate reporting, ESG research and methodology design for global investors, and academic research on sustainable finance and climate risk. Daniel has also facilitated group programs in diverse settings, from interactive public service workshops to immersive cultural experiences in remote Indigenous communities.
Participant benefits
Learn how to evaluate AI tools before deploying them in real-world settings.
Understand the importance of governance, traceability, and SME oversight.
Gain insights into cross-agency collaboration and shared learning in AI.
Suitable for
All Staff
Category and User level
This learning experience aligns with the Digital Profession at the Foundation level.
Price
Free of charge.
Additional Information
- To enrol in this session you will need a valid APSLearn profile.
- Steps on how to create an APSLearn profile or to view FAQs can be found here.
- If you have moved departments or you have multiple APSLearn profiles, you can request to merge your profiles here.
- If your training venue is at MoAD, please refer to the directional map to assist with locating the training room.
- Capacity limits may apply. Please log into APSLearn to check availability.
- If the session is full or none of the additional sessions are suitable, you can express your interest in this event through APSLearn. You will be added to a mailing list and notified when a new session becomes available.