AI-Assisted Decision-Making & Clinical Safeguards Statement
Clinicians Check Global Group
Effective Date: 27/05/2025
Version: 1.2
Owner: Legal & Compliance Division
1. Purpose
This statement outlines how CliniciansCheck deploys artificial intelligence (AI) technologies to enhance user experience, support decision-making, and safeguard clinical integrity. It sets out global best practices for safe, ethical, and compliant AI use within a healthcare-related platform.
2. AI Scope & Use Cases
AI is integrated into the platform in non-diagnostic, supportive capacities only. This includes, but is not limited to:
- Search and matching algorithms;
- Content summarisation and recommendations;
- Language support and translation;
- Patient navigation and FAQs (via GIGI, our AI guide);
- Marketplace optimisation and fraud prevention;
- Profile generation and automation tools.
AI does not make clinical decisions, provide medical advice, or replace qualified healthcare professionals.
3. Clinical Safeguards
3.1 Human Oversight: All outputs from AI systems are subject to human review by qualified personnel. Where AI provides suggestions, these are clearly labelled as non-binding and informational.
3.2 AI in Patient Interactions (GIGI):
- GIGI is trained only on verified, public-facing platform content;
- GIGI cannot generate or interpret medical records;
- Patients are informed they are speaking to an AI and given the option to escalate to a human agent.
3.3 Professional Use: Any AI-generated data provided to clinicians is to assist, not replace, professional judgment. Use of AI in profile generation is disclosed and auditable.
4. Legal & Regulatory Frameworks
CliniciansCheck ensures AI deployment complies with:
- UK Data Protection Act 2018 & GDPR (EU & UK) – Articles 13–22
- EU AI Act (Regulation (EU) 2024/1689), including its high-risk classification requirements
- US Algorithmic Accountability Act (2022 proposal)
- California Privacy Rights Act (CPRA)
- OECD AI Principles (2019)
- UNESCO Recommendation on the Ethics of AI (2021)
- Australia’s AI Ethics Principles
- Singapore Model AI Governance Framework
5. Transparency & Explainability
5.1 All users are notified when AI is in use.
5.2 Clear disclaimers are shown where AI-generated content may influence decision-making.
5.3 Any AI that plays a role in content ranking, seller profiling, or pricing is subject to internal audit.
6. AI Risk Management & Bias Mitigation
6.1 Regular audits are conducted on AI training data, usage logs, and performance outputs.
6.2 CliniciansCheck employs de-biasing techniques and representative datasets.
6.3 AI is never used in a way that impacts protected characteristics (e.g., gender, race, religion) in a discriminatory way.
7. Limitations & Liability
7.1 AI is used to augment, not replace, human decision-making.
7.2 CliniciansCheck is not liable for actions taken based solely on AI outputs.
7.3 Users agree to use AI features in accordance with this statement and the platform’s Terms of Use.
8. Future-Proofing & AI Evolution
As AI laws evolve, CliniciansCheck is committed to updating this policy to remain aligned with:
- Emerging global AI regulation;
- Industry best practices in healthcare AI;
- Continuous stakeholder and user feedback.
9. Contact
Questions or concerns about AI usage can be addressed to: operationsteam@clinicianscheck.com
“AI may assist, but it does not replace care. We govern our systems with the same scrutiny, integrity, and ethical consideration as our human experts.” — Legal & Compliance Division, CliniciansCheck Global Group