AI Model Audit Logging and Review Policy
CliniciansCheck
Version 1.0 | Published: 29 May 2025 | Status: Active
Next Review Due: 29 August 2025
Policy Owner: Head of AI Compliance & Clinical Integrity
Approved by: Board Governance Committee
Jurisdiction: Global – UK, EU, US, Canada, Australia, Singapore, India
1. Purpose
1.1 This policy outlines the mandatory standards by which CliniciansCheck logs, audits, and reviews artificial intelligence (AI) systems used on its platform.
1.2 It ensures legal defensibility, clinical safety, accountability, and transparency in line with the EU Artificial Intelligence Act, NHS DTAC, the US Executive Order on Safe, Secure, and Trustworthy AI, UK ICO guidance, and ISO/IEC 42001.
1.3 The goal is to manage AI risk proactively, meet global regulatory requirements, and maintain user trust in both clinical and operational use cases of AI.
2. Scope
2.1 This policy applies to all artificial intelligence (AI) and machine learning (ML) systems used, developed, or integrated within the CliniciansCheck platform.
2.2 It includes proprietary models, open-source components, and third-party services that affect patients, clinicians, administrators, or internal platform users.
2.3 AI functions covered include clinical triage, form generation, symptom prompts, user recommendations, moderation, ranking, and automation.
3. Governance and Accountability
3.1 The designated AI Oversight Officer is the Head of AI Compliance & Clinical Integrity, who is responsible for implementation, incident escalation, audit readiness, and alignment with applicable laws.
3.2 All AI developers and suppliers must provide full documentation for any AI model or system used. This includes its intended use, training dataset summaries, limitations, validation results, and fairness or bias assessments.
3.3 All AI models used on the platform must undergo internal governance review prior to deployment or significant update.
3.4 The Board Governance Committee retains ultimate oversight of AI use across the platform and must be informed of any high-risk deployment, incident reports, or regulator engagement.
4. Audit Logging and Traceability
4.1 Every significant AI interaction is logged, including timestamps, model identifiers, version data, session context, and user references where applicable; an illustrative record structure follows at the end of this section.
4.2 AI output logs are securely stored, encrypted in transit and at rest, and retained in line with CliniciansCheck's data retention policy and applicable legal obligations.
4.3 Logs are structured to allow full forensic traceability and can be made available for investigation in cases of complaint, incident, or legal/regulatory scrutiny.
4.4 Internal monitoring ensures integrity of logs and alerts administrators to anomalies or unauthorised access attempts.
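The following is a minimal Python sketch of how a log record satisfying clauses 4.1 and 4.4 might be structured. The field names, the model identifier, and the SHA-256 hash chain used for tamper evidence are illustrative assumptions, not CliniciansCheck's actual logging implementation.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIAuditRecord:
    """One logged AI interaction, per clauses 4.1 and 4.4 (illustrative)."""
    timestamp: str              # ISO 8601, UTC
    model_id: str               # hypothetical model name
    model_version: str          # exact deployed version
    session_context: str        # opaque session reference
    user_ref: Optional[str]     # pseudonymous user reference, where applicable
    output_summary: str         # suitably redacted description of the output
    prev_hash: str              # hash of the previous record (tamper evidence)

    def record_hash(self) -> str:
        """Hash over the whole record; chaining prev_hash makes later edits
        detectable by the integrity monitoring described in clause 4.4."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

record = AIAuditRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_id="triage-assist",           # hypothetical identifier
    model_version="2.3.1",
    session_context="session-9f2c",
    user_ref="user-0042",
    output_summary="Suggested urgent-care pathway; clinician review required.",
    prev_hash="0" * 64,                 # genesis record
)
print(record.record_hash())
```

Chaining each record's hash to its predecessor means any after-the-fact alteration breaks the chain, which is one simple way the integrity monitoring in 4.4 could detect tampering.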
5. Audit Review Process
5.1 A quarterly audit review is conducted for all AI models currently deployed on the platform.
5.2 Immediate ad-hoc reviews are triggered when any of the following occur: a model is significantly retrained or altered; an error, complaint, or safety concern is reported; an abnormal pattern is detected; or a regulator requests information.
5.3 Each audit review evaluates model performance, accuracy, drift, fairness, alignment with purpose, safety, and explainability (an illustrative drift check follows this section).
5.4 All audit activities are documented, reviewed internally, and used to inform decisions on model updates, retraining, or suspension.
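As one example of how the drift evaluation in 5.3 could be operationalised, the sketch below computes a Population Stability Index (PSI) between validation-time and current model score distributions. The 0.2 threshold is a common industry rule of thumb, not a CliniciansCheck-mandated value, and the data are synthetic.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline score distribution and a recent one.
    Values above ~0.2 are conventionally treated as significant drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero / log(0) in sparse bins.
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.5, 0.1, 10_000)   # validation-time model scores
current = rng.normal(0.55, 0.12, 10_000)  # this quarter's model scores
score = psi(baseline, current)
print(f"PSI = {score:.3f} -> {'review/retrain' if score > 0.2 else 'stable'}")
```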
6. Evaluation Standards
6.1 Models are tested for accuracy and performance across different use cases and population groups.
6.2 Fairness audits are performed to check for unintended discrimination based on gender, ethnicity, disability, age, or other protected characteristics; an illustrative parity check follows this section.
6.3 All AI use must be consistent with its declared purpose and must not exceed the boundaries of clinical safety or ethical use.
6.4 AI systems must meet requirements under GDPR Article 22, the EU AI Act (Chapter III, high-risk AI systems), the UK Data Protection Act 2018, and NHS DTAC principles where applicable.
6.5 Any system found to breach safety, legality, or ethical obligations will be suspended and referred to the Board for review.
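The fairness audit in 6.2 could, for instance, include a demographic parity check such as the sketch below. The column names, the 5-percentage-point tolerance, and the data are hypothetical assumptions; a real audit would cover multiple metrics and protected characteristics.

```python
import pandas as pd

def selection_rate_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Max difference in positive-outcome rate between any two groups
    (demographic parity difference)."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Tiny synthetic audit sample, for illustration only.
audit_df = pd.DataFrame({
    "gender": ["f", "f", "m", "m", "f", "m", "f", "m"],
    "flagged_for_review": [1, 0, 1, 1, 0, 1, 1, 1],
})
gap = selection_rate_gap(audit_df, "gender", "flagged_for_review")
print(f"Parity gap = {gap:.2%} -> "
      f"{'escalate per 6.5' if gap > 0.05 else 'within tolerance'}")
```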
7. Human Oversight and Escalation
7.1 No AI system is authorised to make final or autonomous decisions in any area impacting user safety, legal status, or clinical care.
7.2 AI recommendations are advisory only and are accompanied by disclaimers and transparency notices that make clear they are not determinative and do not constitute final decisions.
7.3 Where clinical decisions are supported by AI, a qualified human reviewer must make the final judgement, especially where patient safety or treatment is involved (a sign-off sketch follows this section).
7.4 Users have the right to challenge AI outputs, request explanations, and seek human review under data protection laws.
7.5 Escalation mechanisms are in place to handle urgent incidents or high-risk outcomes, and these are regularly tested and reviewed.
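By way of illustration, the human-oversight requirement in 7.1–7.3 can be enforced in software by holding every AI recommendation as advisory until a named human reviewer signs it off. The classes and field names below are hypothetical, not the platform's real interfaces.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIRecommendation:
    summary: str
    advisory_notice: str = "AI-generated suggestion; not a final decision."
    reviewer: Optional[str] = None
    approved: bool = False

    def sign_off(self, reviewer_id: str, approve: bool) -> None:
        """Only a named human reviewer can approve the recommendation."""
        self.reviewer = reviewer_id
        self.approved = approve

def act_on(rec: AIRecommendation) -> str:
    """Refuses to act unless a human has signed off (clause 7.1)."""
    if not rec.approved or rec.reviewer is None:
        raise PermissionError("No autonomous final decisions (clause 7.1).")
    return f"Action taken: {rec.summary} (approved by {rec.reviewer})"

rec = AIRecommendation(summary="Escalate referral to urgent pathway")
rec.sign_off(reviewer_id="clinician-114", approve=True)
print(act_on(rec))
```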
8. Retention and Access Control
8.1 AI audit logs are retained for a minimum of six years from the date of creation, unless a longer period is required under applicable law; an illustrative retention check follows this section.
8.2 Logs are stored in encrypted environments with access restricted to approved compliance, legal, and security staff.
8.3 All access is logged and monitored in accordance with CliniciansCheck’s internal security and data protection protocols.
8.4 No third party or internal team may access audit logs without appropriate authorisation, and all access events are subject to audit.
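A minimal sketch of the retention rule in 8.1: a log becomes eligible for deletion only after the six-year minimum has elapsed and no legal hold applies. The dates and the legal-hold flag are illustrative assumptions.

```python
from datetime import date

RETENTION_YEARS = 6  # clause 8.1 minimum

def retention_expiry(created: date) -> date:
    """Six years after creation (29 February falls back to 28 February)."""
    try:
        return created.replace(year=created.year + RETENTION_YEARS)
    except ValueError:  # created on 29 February of a leap year
        return created.replace(year=created.year + RETENTION_YEARS, day=28)

def may_delete(created: date, today: date, legal_hold: bool) -> bool:
    """Deletion is permitted only past expiry and with no legal hold."""
    return today >= retention_expiry(created) and not legal_hold

print(may_delete(date(2019, 5, 29), date(2025, 5, 29), legal_hold=False))  # True
print(may_delete(date(2024, 1, 1), date(2025, 5, 29), legal_hold=False))   # False
```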
9. Data Subject Rights
9.1 CliniciansCheck supports the rights of users, patients, and clinicians under all applicable privacy frameworks, including GDPR, CCPA, and LGPD.
9.2 Individuals may request details of how an AI system influenced a decision about them.
9.3 Individuals may request human intervention, an explanation of any automated result, and the right to contest it.
9.4 Requests must be directed to the AI Governance Team and will be handled within the legally required timeframe, with outcomes logged (see the deadline sketch following this section).
9.5 CliniciansCheck maintains templates and workflows to support timely, compliant responses to such requests.
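For illustration, the response-deadline element of such a workflow might track due dates per framework, as sketched below. The 30-day GDPR window approximates the one-calendar-month rule in Article 12(3), and 45 days is the CCPA statutory default, but this simplified mapping is an assumption, not legal advice.

```python
from datetime import date, timedelta

# Illustrative deadline mapping; the real workflow must select per jurisdiction
# and account for permitted extensions.
DEADLINES = {
    "GDPR": timedelta(days=30),   # ~one calendar month, extendable
    "CCPA": timedelta(days=45),
}

def response_due(received: date, framework: str) -> date:
    return received + DEADLINES[framework]

req = date(2025, 6, 2)
print(f"GDPR request received {req}: respond by {response_due(req, 'GDPR')}")
```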
10. Regulatory Alignment
10.1 This policy is aligned with the following international laws, standards, and guidance documents:
10.2 EU Artificial Intelligence Act (2024)
10.3 UK NHS Digital Technology Assessment Criteria (DTAC)
10.4 UK Information Commissioner’s Office (ICO) Guidance on AI and Explainability
10.5 US Executive Order on Safe, Secure, and Trustworthy AI (2023)
10.6 ISO/IEC 42001:2023 Artificial Intelligence Management Systems
10.7 Other national digital health and privacy laws in jurisdictions where CliniciansCheck operates, including Australia, Singapore, Canada, India, and the United States.
11. Reporting and Contact
11.1 All questions, complaints, audit requests, or AI-related concerns must be submitted in writing to the AI Governance Team.
11.2 Email communication should be directed to: ai-operationsteam@clinicianscheck.com
11.3 The AI Oversight Officer will coordinate a response, document the interaction, and take any necessary remedial or regulatory action.
12. Version Control
12.1 Version: 1.0
12.2 Date Published: 29 May 2025
12.3 Status: Active
12.4 Next Scheduled Review: 29 August 2025
12.5 Policy Owner: Head of AI Compliance & Clinical Integrity
12.6 Approved By: Board Governance Committee
12.7 Applies To: All users, vendors, contractors, clinicians, employees, and platform partners globally