Clinical AI Use Catalogue
CliniciansCheck | Version 1.0 | Published: 29 May 2025 | Status: Active | Next Review Due: 29 November 2025 | Policy Owner: Head of AI Compliance & Clinical Integrity | Approved by: Board Governance Committee | Jurisdiction: Global – UK, EU, US, Canada, Australia, Singapore, India
1. Purpose
1.1 This document provides a clear and transparent overview of how artificial intelligence (AI) is used across the CliniciansCheck platform in clinical, operational, and user-interaction contexts.
1.2 It supports regulatory compliance, user rights under data protection laws, and ethical clinical integrity by providing insight into which AI systems are used, for what purpose, and under what safeguards.
1.3 This catalogue does not list proprietary technical specifications but outlines categories of use and the safeguards applied.
2. Scope
2.1 This catalogue applies to all AI and machine learning systems embedded within the CliniciansCheck platform.
2.2 It includes AI systems used for triage support, user profiling, content structuring, automation, and decision support in non-diagnostic settings.
2.3 The AI systems described are subject to human oversight and do not operate autonomously in clinical or high-risk environments.
3. Summary of AI Use Cases
3.1 Clinician Profile Categorisation: AI is used to organise and present clinician profiles based on user search behaviour, clinical specialty, service relevance, and verified credentials.
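For illustration only, the sketch below (Python, with hypothetical field names) shows how a presentation-order score of this kind could combine specialty match, service relevance, and verified-credential status. It is an assumption-laden sketch, not the platform's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class ClinicianProfile:
    name: str
    specialty: str
    credentials_verified: bool
    service_tags: set[str]

def rank_profiles(profiles: list[ClinicianProfile],
                  searched_specialty: str,
                  searched_tags: set[str]) -> list[ClinicianProfile]:
    """Order profiles for display; the score affects presentation order only."""
    def score(p: ClinicianProfile) -> float:
        s = 0.0
        if p.specialty.lower() == searched_specialty.lower():
            s += 2.0                                  # specialty match
        s += len(p.service_tags & searched_tags)      # service relevance
        if p.credentials_verified:
            s += 1.0                                  # verified credentials surfaced first
        return s
    return sorted(profiles, key=score, reverse=True)
```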
3.2 Health and Service Triage Assist: For patients or users browsing services, AI helps surface relevant service categories or practitioner types. It does not diagnose, predict, or replace clinical judgement.
3.3 Patient/Clinician Matching Guidance: AI systems assist users in identifying clinicians based on need, location, availability, and specialty. The system offers guidance, not assignments or enforced pairings.
3.4 Form Completion and Intake Support: AI may support the completion of intake forms by pre-filling non-sensitive sections or assisting with form navigation. Final responsibility always remains with the user.
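As a hedged sketch only, pre-filling of this kind could be limited to an explicit allow-list of non-sensitive fields, with every pre-filled value labelled and left editable by the user. The field names below are hypothetical.

```python
# Hypothetical field names; only an explicit allow-list of non-sensitive
# fields is ever pre-filled, and every pre-filled value stays editable.
NON_SENSITIVE_FIELDS = {"preferred_language", "contact_method", "appointment_type"}

def prefill_intake_form(form: dict, account_profile: dict) -> dict:
    """Return suggested values for empty, non-sensitive fields only."""
    suggestions = {}
    for field in NON_SENSITIVE_FIELDS:
        if not form.get(field) and field in account_profile:
            suggestions[field] = {
                "value": account_profile[field],
                "source": "ai_prefill",   # labelled so the user can confirm or change it
            }
    return suggestions
```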
3.5 Automated Messaging and Guidance: AI helps craft messages and responses during platform interactions, such as appointment requests or service inquiries. AI is never used to deliver medical advice or interpret clinical data.
3.6 Marketplace Search Optimisation: AI is used to enhance search relevance by interpreting free-text input and matching it to indexed provider profiles and service tags.
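A simplified sketch of free-text-to-tag matching, under the assumption of a plain token-overlap score; the platform's actual indexing and ranking are not specified in this catalogue.

```python
def search_providers(query: str, index: dict[str, set[str]]) -> list[str]:
    """Return provider IDs ordered by overlap between query tokens and service tags.

    `index` maps a provider ID to its indexed tags; purely illustrative.
    """
    tokens = set(query.lower().split())
    scored = [(len(tokens & tags), provider_id) for provider_id, tags in index.items()]
    return [pid for overlap, pid in sorted(scored, reverse=True) if overlap > 0]
```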
3.7 Content Structuring and Clean-Up: AI supports formatting, spell-checking, and cleaning of descriptions submitted by users or sellers to improve clarity, without altering clinical meaning.
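Purely as an illustration of presentation-only clean-up, the sketch below normalises whitespace and sentence capitalisation while leaving the author's wording, including clinical terminology, exactly as written.

```python
import re

def tidy_description(text: str) -> str:
    """Presentation-only clean-up: whitespace and sentence capitalisation.

    Wording and clinical terminology are left exactly as the author wrote them.
    """
    cleaned = re.sub(r"\s+", " ", text).strip()        # collapse stray whitespace
    sentences = re.split(r"(?<=[.!?])\s+", cleaned)    # split on sentence boundaries
    sentences = [s[0].upper() + s[1:] if s else s for s in sentences]
    return " ".join(sentences)
```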
4. Safeguards and Oversight
4.1 All AI systems used on the platform are subject to formal oversight by the Head of AI Compliance & Clinical Integrity.
4.2 All outputs presented to users are advisory rather than determinative, and include clear instructions and limitations where applicable.
4.3 Any interaction with AI is logged and traceable for auditing, review, or compliance inspection.
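An illustrative sketch of the kind of append-only audit record such logging could produce; the field names and the JSON Lines file are assumptions, not the platform's actual schema.

```python
import json
import uuid
from datetime import datetime, timezone

def log_ai_interaction(user_id: str, feature: str, ai_output_summary: str) -> dict:
    """Append an audit record for an AI-assisted interaction to a JSON Lines log."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "feature": feature,                  # e.g. "search_optimisation"
        "ai_output_summary": ai_output_summary,
        "human_override_available": True,    # every interaction can be escalated for review
    }
    with open("ai_audit_log.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```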
4.4 No AI system used on CliniciansCheck is authorised to deliver a diagnosis, prescribe treatment, or make autonomous healthcare decisions.
4.5 Human review, intervention, and appeal mechanisms are available for any AI-supported interaction on the platform.
5. Risk Classification and Impact
5.1 AI systems on CliniciansCheck are classified as low to moderate risk under the EU Artificial Intelligence Act and equivalent international standards.
5.2 No system meets the threshold for classification as a high-risk AI system, as defined in Chapter III (Article 6 and Annex III) of the EU AI Act, because:
The platform does not perform diagnostic, prognostic, or treatment-decision-making tasks.
All services are mediated by qualified humans.
No AI systems are used in medical devices or regulated software-as-a-medical-device functions.
5.3 CliniciansCheck conducts regular audits and drift assessments to confirm that AI behaviour remains within its approved classification and declared function.
6. Explainability and User Rights
6.1 All AI-generated suggestions and structuring changes are clearly identified and are never hidden from users.
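A minimal sketch of how provenance labelling could be attached to any AI-assisted suggestion before display; the structure and notice text are hypothetical.

```python
def present_suggestion(content: str, generated_by_ai: bool) -> dict:
    """Wrap a suggestion with an explicit provenance label before display."""
    return {
        "content": content,
        "ai_generated": generated_by_ai,   # surfaced in the interface, never hidden
        "notice": ("Generated with AI assistance; you can request human review."
                   if generated_by_ai else None),
    }
```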
6.2 Users are informed of their right to seek clarification or request that a human override or re-evaluate any AI output they believe is inaccurate, misleading, or inappropriate.
6.3 AI systems are not used in sensitive personalisation contexts such as credit scoring, insurance risk classification, or behavioural tracking.
6.4 Requests relating to AI processing or explanations should be submitted by email to: operationsteam@clinicianscheck.com
7. Regulatory Alignment
7.1 This catalogue has been published to demonstrate compliance with the following:
7.2 EU Artificial Intelligence Act (2024)
7.3 UK NHS Digital Technology Assessment Criteria (DTAC)
7.4 UK ICO Guidance on Explainable AI
7.5 US Executive Order on Safe, Secure, and Trustworthy AI (2023)
7.6 ISO/IEC 42001:2023 (Artificial Intelligence Management Systems)
7.7 National health, privacy, and digital safety laws across all CliniciansCheck operating regions
8. Version Control
8.1 Version: 1.0
8.2 Date Published: 29 May 2025
8.3 Status: Active
8.4 Next Scheduled Review: 29 November 2025
8.5 Policy Owner: Head of AI Compliance & Clinical Integrity
8.6 Approved By: Board Governance Committee
8.7 Contact Email: operationsteam@clinicianscheck.com
8.8 Applies To: All platform users, patients, clinicians, vendors, and AI service providers