
Clinicians Check Global Group

Effective Date: 27/05/2025

Version: 1.0

Owner: Legal, Ethics & AI Compliance Division

1. Purpose

This policy sets out CliniciansCheck's commitment to algorithmic fairness, bias prevention, and transparency in all AI-driven features and decision-making systems across our platform. It ensures that all automated processes meet global legal, ethical, and clinical integrity standards and support health equity without discrimination.

2. Scope

This policy applies to:

  • All AI and machine learning (ML) systems deployed by CliniciansCheck;

  • All features involving automated clinician rankings, service recommendations, profile scoring, fraud detection, and user engagement tools;

  • All territories in which CliniciansCheck operates or offers services.

3. Legal & Regulatory Frameworks

CliniciansCheck adheres to and is guided by the following:

  • EU Artificial Intelligence Act (Regulation (EU) 2024/1689)

  • EU General Data Protection Regulation (GDPR) – Articles 5, 22 and 35

  • UK GDPR & Data Protection Act 2018

  • US Algorithmic Accountability Act (proposed) & Illinois Biometric Information Privacy Act (BIPA)

  • OECD Principles on Artificial Intelligence

  • UNESCO Recommendation on the Ethics of AI

  • Canada’s AIDA (Artificial Intelligence and Data Act)

  • Singapore’s Model AI Governance Framework

  • Brazil’s General Data Protection Law (LGPD) & national AI strategy

  • India’s Digital Personal Data Protection Act (2023)

4. Guiding Principles

4.1 Fairness by Design: All algorithmic systems must be designed to avoid outcomes that unfairly disadvantage users based on race, gender, disability, age, religion, geography, or socio-economic status.

4.2 Human Oversight: High-impact automated decisions must be explainable, reviewable, and reversible. A human-in-the-loop is mandated for decisions that affect clinical care.
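
As a minimal sketch of how this oversight can be wired into a decision pipeline, the Python fragment below holds any high-impact automated decision for human review before it takes effect. It is illustrative only: the Decision fields, the impact threshold, and the review-queue callback are assumptions made for the example, not a description of CliniciansCheck's production systems.

  from dataclasses import dataclass
  from typing import Callable

  @dataclass
  class Decision:
      subject_id: str      # user or clinician affected by the decision
      outcome: str         # e.g. a ranking adjustment or profile flag
      impact_score: float  # model-estimated impact on a 0.0-1.0 scale (assumed)
      rationale: str       # explanation retained for reviewers and audit records

  HIGH_IMPACT_THRESHOLD = 0.7  # illustrative cut-off; set by governance in practice

  def apply_decision(decision: Decision,
                     enact: Callable[[Decision], None],
                     queue_for_review: Callable[[Decision], None]) -> None:
      """Enact low-impact decisions automatically; hold high-impact ones for a human."""
      if decision.impact_score >= HIGH_IMPACT_THRESHOLD:
          # Human-in-the-loop: nothing high-impact is enacted until a reviewer approves it.
          queue_for_review(decision)
      else:
          enact(decision)

Because high-impact decisions are queued rather than enacted, they remain reviewable and reversible before anyone is affected.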

4.3 Clinical Equity: Algorithms are tested and monitored to ensure equitable performance across diverse clinical specialties, ethnicities, and the geographies we serve.

5. Algorithmic Testing & Monitoring

5.1 Bias Testing Protocols:

  • Algorithms undergo pre-deployment and post-deployment bias impact assessments.

  • Datasets are stress-tested for demographic skew, representativeness, and source integrity.

  • Outcomes are audited across age, gender, race, disability, and location parameters.
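
To make the outcome audit concrete, the following sketch compares positive-outcome rates across demographic groups and flags any group whose rate falls well below that of the best-performing group. It is illustrative only: the pandas tooling, the column names, and the 0.8 threshold (borrowed from the common "four-fifths" disparate-impact heuristic) are assumptions, not our internal audit pipeline.

  import pandas as pd

  def audit_outcome_rates(df: pd.DataFrame,
                          group_col: str,
                          outcome_col: str,
                          min_ratio: float = 0.8) -> pd.DataFrame:
      """Flag groups whose positive-outcome rate is below min_ratio of the best group's rate."""
      rates = df.groupby(group_col)[outcome_col].mean()  # share of positive outcomes per group
      ratios = rates / rates.max()                       # ratio relative to the best-performing group
      report = pd.DataFrame({"positive_rate": rates,
                             "ratio_to_best": ratios,
                             "flagged": ratios < min_ratio})
      return report.sort_values("ratio_to_best")

  # Hypothetical usage: audit recommendation outcomes by age band.
  # report = audit_outcome_rates(outcomes_df, group_col="age_band", outcome_col="recommended")

The same audit would be repeated for each parameter listed above (age, gender, race, disability, location), both before and after deployment.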

5.2 Monitoring Standards:

  • Real-time monitoring for model drift, discriminatory patterns, and unintended consequences.

  • Flagging systems notify internal review panels if thresholds are breached.
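
One simple way to picture the drift check is a population-stability comparison between a reference score distribution and the live one, with a notification callback standing in for the flagging systems above. This is a hedged sketch: the PSI metric, the bin count, and the 0.2 alert threshold are illustrative assumptions, not our production monitoring configuration.

  import numpy as np

  def population_stability_index(reference: np.ndarray,
                                 current: np.ndarray,
                                 bins: int = 10) -> float:
      """Compare two score distributions; larger values indicate stronger drift."""
      edges = np.histogram_bin_edges(reference, bins=bins)
      ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
      cur_frac = np.histogram(current, bins=edges)[0] / len(current)
      ref_frac = np.clip(ref_frac, 1e-6, None)  # avoid log(0) on empty bins
      cur_frac = np.clip(cur_frac, 1e-6, None)
      return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

  DRIFT_ALERT_THRESHOLD = 0.2  # illustrative; roughly the upper end of common rules of thumb

  def check_drift(reference_scores, current_scores, notify_review_panel) -> None:
      """Notify the internal review panel when drift breaches the threshold."""
      psi = population_stability_index(np.asarray(reference_scores, dtype=float),
                                       np.asarray(current_scores, dtype=float))
      if psi >= DRIFT_ALERT_THRESHOLD:
          notify_review_panel(f"Model drift detected: PSI={psi:.3f}")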

5.3 Independent Review:

  • External third-party audits and peer reviews are conducted annually.

  • Results are summarised in our annual AI Ethics Impact Report.

6. User Transparency & Redress

6.1 User Rights:

  • Users have the right to know when AI is involved in decisions affecting them.

  • Clinicians and patients may request an explanation of how an automated decision was made.

6.2 Appeals Mechanism:

  • Any perceived discrimination or inaccuracy may be appealed under our Dispute Resolution Mechanism.

  • Escalation routes include technical and legal arbitration.

7. Governance & Accountability

7.1 Internal Governance Board:

  • A cross-functional AI Ethics Committee, composed of legal, technical, medical, and DEI officers, oversees compliance.

7.2 Training & Awareness:

  • All developers, data scientists, and operations staff undergo regular bias and ethics training.

7.3 Records & Evidence:

  • All bias testing, model documentation, and impact reports are retained for 10 years and made available to regulators upon request.

8. Enforcement

8.1 Any breach of this policy — including hidden bias, lack of documentation, or discriminatory design — may result in:

  • Suspension of the AI system in question;

  • Notification to global regulatory authorities;

  • Internal disciplinary measures.

9. Contact

For algorithmic fairness concerns or ethical reviews, contact:

operationsteam@clinicianscheck.com

“We recognise the gravity of automation in healthcare and hold ourselves to the highest global standard of algorithmic fairness.” — Legal, Ethics & AI Compliance Division