
CliniciansCheck Global Group

Last updated: 27/05/2025

Executive Summary

CliniciansCheck employs artificial intelligence (AI), machine learning (ML), and algorithmic automation to enhance platform efficiency, scalability, and decision support. This statement defines our commitment to ethical AI governance, transparency, risk mitigation, and user rights, ensuring that algorithms never compromise clinical judgment, user autonomy, or legal compliance.

This policy aligns with the EU Artificial Intelligence Act, OECD AI Principles, UK AI White Paper (2023), US Blueprint for an AI Bill of Rights, and similar global frameworks. It applies across all AI components in our platform—internal or integrated via third-party APIs.

1. Scope of Algorithmic Use

Algorithms may be used to:

  • Suggest clinicians, services, or products based on user input or preferences.

  • Flag duplicate or suspicious seller profiles or irregular platform behavior.

  • Support internal operations, such as triage prioritization, fraud detection, or pricing logic.

  • Assist in automated content moderation or spam filtering.

AI is never used to make clinical diagnoses, replace medical advice, or determine patient eligibility for treatment.

2. Human Oversight & Control (Human-in-the-Loop)

All AI-assisted systems operate under a "human-in-the-loop" governance model.

Final decision-making responsibility rests with:

  • The clinician (in patient-facing interactions)

  • The administrator or platform moderator (in content or seller onboarding)

Automated decisions must be reviewable, reversible, and overrideable by authorized human users.
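
By way of illustration only, the minimal Python sketch below shows one way an automated outcome can remain reviewable, reversible, and overrideable. All names (AutomatedDecision, override, the sample IDs) are hypothetical assumptions for the example, not CliniciansCheck's actual implementation.

```python
# Minimal sketch only; names and fields are hypothetical, not CliniciansCheck code.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AutomatedDecision:
    decision_id: str
    outcome: str            # e.g. "flag_profile" or "suggest_service"
    model_version: str      # recorded to support audit and rollback
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    overridden_by: Optional[str] = None   # ID of the human reviewer, if any
    override_reason: Optional[str] = None

    def override(self, reviewer_id: str, new_outcome: str, reason: str) -> None:
        """An authorized human reverses or replaces the automated outcome."""
        self.overridden_by = reviewer_id
        self.override_reason = reason
        self.outcome = new_outcome

# Usage: a moderator reverses an automated seller flag.
decision = AutomatedDecision("d-001", "flag_profile", "fraud-model-v2.3")
decision.override("moderator-42", "approve_profile", "manual ID check passed")
```

Keeping the model version and reviewer identity on each record is what makes the decision traceable and reversible after the fact.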

3. Explainability & Transparency

We ensure:

  • Clear disclosure when AI is used in interactions, suggestions, or outputs.

  • Documentation of how and why algorithmic recommendations are made.

  • User access to simplified explanations in plain language—especially where outcomes affect service visibility, pricing, or eligibility.

For regulated functions (e.g., pricing tiers, prioritization), we also provide algorithmic logic summaries and version control logs.

4. Testing, Audit & Continuous Evaluation

All AI systems undergo robust pre-deployment testing for:

  • Accuracy

  • Relevance

  • Fairness

  • Unintended bias

Post-launch, we conduct regular audits, including:

  • Disparate impact testing

  • Input/output traceability

  • Model drift monitoring

Affected users may request a human review of any automated decision affecting them.
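
As an illustration, the sketch below implements one widely used disparate impact check, the "four-fifths rule": each group's selection rate should be at least 80% of the highest group's rate. The function names and sample data are assumptions for the example, not our audit tooling.

```python
# Illustrative four-fifths rule check; names and data are hypothetical.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected: bool) pairs."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in records:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def passes_four_fifths(records, threshold=0.8):
    rates = selection_rates(records)
    highest = max(rates.values())
    return all(rate / highest >= threshold for rate in rates.values())

# Example: group B's rate (0.5) is under 80% of group A's (1.0) -> fails.
sample = [("A", True), ("A", True), ("B", True), ("B", False)]
print(passes_four_fifths(sample))  # False
```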

5. Bias Mitigation & Fairness Principles

We adhere to algorithmic fairness practices to prevent discrimination based on race, ethnicity, gender, age, religion, disability, socioeconomic status, or location.

We prohibit:

  • Proxy bias (e.g., using ZIP/postcode as a socioeconomic proxy).

  • Feedback loops that disadvantage certain users based on previous AI outputs.

We engage independent reviewers and legal advisors to assess fairness regularly.
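
For illustration, one simple way to screen for proxy bias is to measure how well a candidate feature (such as postcode) predicts a protected attribute; a large lift over the baseline suggests the feature may act as a proxy and should be excluded. The sketch below is a minimal example under assumed thresholds and labels, not our production screening logic.

```python
# Illustrative proxy-bias screen; threshold and labels are assumptions.
from collections import Counter, defaultdict

def proxy_lift(pairs):
    """pairs: iterable of (feature_value, protected_value)."""
    by_feature = defaultdict(Counter)
    overall = Counter()
    for feat, prot in pairs:
        by_feature[feat][prot] += 1
        overall[prot] += 1
    n = sum(overall.values())
    baseline = max(overall.values()) / n  # accuracy of always guessing the mode
    # Accuracy when guessing the most common protected value per feature value.
    predicted = sum(max(c.values()) for c in by_feature.values()) / n
    return predicted / baseline

def is_potential_proxy(pairs, threshold=1.2):
    return proxy_lift(pairs) >= threshold

# Example: postcode almost perfectly splits the protected groups -> flagged.
sample = [("N1", "a"), ("N1", "a"), ("E2", "b"), ("E2", "b"), ("N1", "b")]
print(is_potential_proxy(sample))  # True
```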

6. Risk Classification & Governance

All AI systems are classified by risk level:

  • Low-risk: Internal automation (e.g., notification scheduling)

  • Medium-risk: Service suggestions or fraud flagging

  • High-risk: Any AI interfacing with health decisions, identity, or pricing

High-risk systems require pre-approval by the Ethics, Legal & Compliance Committee and are subject to additional logging, explainability, and rollback mechanisms.
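
For illustration, the sketch below mirrors the three tiers above as a simple lookup, including the committee-approval gate for high-risk systems. The domain labels and rules are assumptions for the example, not our production classification logic.

```python
# Illustrative risk-tier lookup; domains and rules are hypothetical.
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # internal automation, e.g. notification scheduling
    MEDIUM = "medium"  # service suggestions, fraud flagging
    HIGH = "high"      # health decisions, identity, pricing

HIGH_RISK_DOMAINS = {"health", "identity", "pricing"}
MEDIUM_RISK_DOMAINS = {"recommendation", "fraud"}

def classify(domain: str) -> RiskTier:
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if domain in MEDIUM_RISK_DOMAINS:
        return RiskTier.MEDIUM
    return RiskTier.LOW

def requires_committee_approval(domain: str) -> bool:
    # High-risk systems need Ethics, Legal & Compliance Committee sign-off.
    return classify(domain) is RiskTier.HIGH

print(requires_committee_approval("pricing"))  # True
```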

7. User Rights Regarding AI Decisions

All users have the right to:

  • Be informed when interacting with an AI system.

  • Request a human explanation or review of any automated platform recommendation or moderation action.

  • Opt out of non-essential algorithmic personalization.

  • Request the deletion or correction of data used to inform AI outputs.

This does not apply to medical advice, clinical decisions, or diagnostic outcomes, which are never made by CliniciansCheck. All healthcare decisions must be made directly by licensed clinicians outside the scope of this platform.

We comply with Article 22 of the GDPR, the UK Data Protection Act (DPA) 2018, and US algorithmic accountability guidelines, and continue to evolve our systems in line with global transparency and fairness standards.

8. Third-Party AI Systems & Vendor Oversight

All vendors of integrated AI tools (e.g., third-party chatbots, LLMs, analytics engines) must:

  • Sign a binding AI Vendor Compliance Agreement

  • Undergo a data privacy and algorithmic ethics review

  • Provide audit logs and fail-safes

CliniciansCheck does not permit black-box AI systems in any mission-critical application.

9. Governance, Documentation & Reporting

All AI systems are inventoried in our Algorithmic Risk Register.

Governance is overseen by the AI Ethics & Compliance Subcommittee, with reporting lines to:

  • The Board of Directors

  • Data Protection Officers

  • Global Legal and Risk Advisory teams

Violations of this policy may result in platform access revocation, legal enforcement, and regulatory notification.

CliniciansCheck Global Group

This statement is part of our enterprise-wide commitment to responsible innovation, clinical safety, and long-term ethical leadership.