
Clinicians Check Global Group

Effective Date: 27/05/2025

Version: 1.2

Owner: Legal, Governance, and AI Ethics Division

1. Purpose

This policy ensures that all artificial intelligence (AI) systems deployed on the CliniciansCheck platform operate under a robust framework of human oversight, intervention capability, and escalation. It is designed to uphold user safety, ethical standards, and regulatory compliance across all jurisdictions, ensuring that no AI decision relating to health, well-being, or data is executed without appropriate human review where such review is required.

2. Scope

This policy applies to:

  • All AI systems, algorithms, models, and automated decision-making tools deployed on the CliniciansCheck platform;

  • All regions and territories where CliniciansCheck operates;

  • All user categories, including patients, clinicians, vendors, administrators, and regulators.

3. Legal Frameworks

CliniciansCheck complies with global legislation and best-practice guidelines including:

  • EU AI Act (Regulation (EU) 2024/1689) – in particular Article 14 (Human Oversight), Article 16 (Obligations of Providers of High-Risk AI Systems), and the post-market monitoring requirements for high-risk AI systems;

  • UK AI White Paper and ICO Guidance;

  • OECD AI Principles;

  • U.S. AI Bill of Rights (White House Blueprint);

  • Singapore Model AI Governance Framework;

  • ISO/IEC 23894:2023 – Risk Management for AI;

  • NHS Code of Conduct for Data-Driven Health and Care Technologies;

  • Health Canada AI Guidance (2024);

  • UNESCO AI Ethics Recommendations.

4. Key Oversight Principles

4.1 Human-in-the-Loop (HITL): AI-driven outputs with clinical or reputational impact require explicit human review before final action is taken (see the escalation tiers in 4.4).

4.2 Human-on-the-Loop (HOTL): AI systems operate autonomously but under continuous human supervision with override capabilities.

4.3 Human-in-Command (HIC): All systems must include mechanisms for immediate human override and cessation of any automated action.

4.4 AI Escalation Tiers (an illustrative routing sketch appears after this list):

Tier 1 (Informational): No human review required, auto-approved (e.g., formatting, SEO tags);

Tier 2 (Reputational/Operational): Flagged to moderation team for review;

Tier 3 (Clinical/Regulatory/Identity-sensitive): Mandatory review by licensed human professional.
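
For illustration only, the minimal Python sketch below shows one way these three tiers could be encoded so that every AI output is routed to the level of human review this policy requires. The class, function, and queue names are hypothetical and do not describe CliniciansCheck's production systems.

```python
# Illustrative sketch only: hypothetical representation of the escalation tiers.
from enum import Enum


class EscalationTier(Enum):
    INFORMATIONAL = 1  # Tier 1: auto-approved (e.g., formatting, SEO tags)
    REPUTATIONAL = 2   # Tier 2: flagged to the moderation team for review
    CLINICAL = 3       # Tier 3: mandatory review by a licensed professional


def route_output(tier: EscalationTier) -> str:
    """Return the review queue an AI output must be sent to for its tier."""
    if tier is EscalationTier.CLINICAL:
        return "licensed-clinician-review"
    if tier is EscalationTier.REPUTATIONAL:
        return "moderation-team-review"
    return "auto-approved"
```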

5. Escalation & Override Procedures

5.1 AI-generated outputs with clinical or compliance impact are automatically routed to a qualified human overseer for review and sign-off.

5.2 Users may request a human review of any AI-influenced decision via our AI Escalation Request Form.

5.3 Any AI flag that triggers regulatory, clinical, or reputational risk must be escalated within 4 business hours.

5.4 Overrides are executed by the following roles (an illustrative sketch appears after this list):

  • Compliance Officers (Tier 2);

  • Licensed Clinicians or Clinical Directors (Tier 3);

  • Data Protection Officer (for AI-influenced data decisions);

  • CTO or AI Ethics Lead (for algorithmic shutdown or emergency rollback).
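
As a sketch only, the snippet below shows how the four-business-hour escalation window (5.3) and the tier-to-role mapping (5.4) might be expressed in code. The names are hypothetical, and a real implementation would also account for business-hour calendars.

```python
# Illustrative sketch only: escalation deadline and override-role mapping.
from datetime import datetime, timedelta

# Hypothetical mapping of escalation tiers to roles authorised to execute overrides (5.4).
OVERRIDE_ROLES = {
    2: ["Compliance Officer"],
    3: ["Licensed Clinician", "Clinical Director"],
}

ESCALATION_WINDOW_HOURS = 4  # per 5.3: escalate within 4 business hours


def escalation_deadline(flag_raised_at: datetime) -> datetime:
    """Naive deadline: adds 4 clock hours; a real system would skip non-business hours."""
    return flag_raised_at + timedelta(hours=ESCALATION_WINDOW_HOURS)
```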

6. Transparency & Explainability

6.1 All AI decisions must include the following (an illustrative record sketch appears after this list):

  • A clear rationale for the decision;

  • Input data summaries;

  • Explanation of the algorithm's role;

  • Access to a human contact for escalation.
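
For illustration only, the following sketch bundles the four elements listed in 6.1 into a single record; the class and field names are hypothetical.

```python
# Illustrative sketch only: the elements every AI decision must carry under 6.1.
from dataclasses import dataclass


@dataclass
class AIDecisionRecord:
    rationale: str       # clear rationale for the decision
    input_summary: str   # summary of the input data used
    algorithm_role: str  # explanation of the algorithm's role
    human_contact: str   # human contact for escalation (e.g., an email address)
```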

6.2 Users must be informed whenever an AI tool has influenced a decision that affects them.

7. Oversight of AI Agent "GIGI"

7.1 "GIGI," the AI assistant within CliniciansCheck, operates under these oversight rules:

  • Never provides standalone clinical advice;

  • Always discloses AI nature before interaction;

  • Triggers escalation for sensitive topics (e.g., mental health, suicide, self-medication);

  • Is regularly audited for bias, hallucinations, or unsupported outputs.

8. Recordkeeping & Auditing

8.1 All AI activity, overrides, escalations, and human reviews (see the illustrative audit-entry sketch after this list) are:

  • Logged and timestamped;

  • Retained securely for 7 years;

  • Subject to internal quarterly audit;

  • Available for external audit by regulators or partners.
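
As a sketch only, the snippet below shows one way an audit entry could capture the logging, timestamping, and seven-year retention requirements of 8.1; the field names are hypothetical and the retention date calculation ignores leap days.

```python
# Illustrative sketch only: a timestamped audit record with a 7-year retention marker.
from datetime import datetime, timedelta, timezone

RETENTION_YEARS = 7  # per 8.1: retained securely for 7 years


def audit_entry(event_type: str, actor: str, details: str) -> dict:
    """Build an audit record; a real system would write this to tamper-evident storage."""
    now = datetime.now(timezone.utc)
    return {
        "event_type": event_type,  # e.g., "override", "escalation", "human_review"
        "actor": actor,            # who performed or reviewed the action
        "details": details,
        "timestamp": now.isoformat(),
        "retain_until": (now + timedelta(days=365 * RETENTION_YEARS)).isoformat(),
    }
```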

9. Enforcement

9.1 Breaches of this policy may result in:

  • Suspension of affected AI tools;

  • Regulatory reporting and notification;

  • Internal disciplinary action against staff;

  • Third-party vendor investigation.

10. Contact

Questions or concerns regarding AI oversight should be directed to:

operationsteam@clinicianscheck.com


"AI must serve human judgment—not replace it. Our oversight is your protection." — Legal, Governance, and AI Ethics Division