
Effective Date: May 2025
Last Reviewed: May 2025
Jurisdictional Coverage: Global (including UK, EU, USA, Canada, Australia, UAE, GCC, Brazil, and all territories where CliniciansCheck operates)

1. Introduction

CliniciansCheck Limited ("CliniciansCheck," "we," "our," or "us") is committed to full transparency regarding the deployment and governance of Artificial Intelligence ("AI") systems on our platform, including our proprietary AI assistant, "Gigi."

This AI Use Disclosure Statement is a legally binding document designed to:

  • Provide users with clear, accessible, and accurate information about how AI is used.

  • Ensure compliance with all applicable global regulations.

  • Limit our legal liability and protect the integrity of our systems.

Use of the CliniciansCheck platform constitutes explicit acknowledgment and acceptance of this AI Use Disclosure Statement.

2. Scope of AI Use

CliniciansCheck employs AI tools for administrative, informational, and automation purposes only. These tools include, but are not limited to:

  • Gigi, our AI assistant for platform navigation, content generation, and summary insights.

  • Automated form population, search refinement, content tagging, and data harmonization.

  • Drafting of generic clinician or service descriptions based on structured inputs.

  • Workflow enhancements such as appointment routing, file preparation, anonymization, and system notifications.

We do not permit AI tools to:

  • Diagnose, treat, or triage patients.

  • Replace licensed clinical judgment.

  • Issue legal, regulatory, or medical advice.

  • Act as a substitute for professional services.

All AI outputs must be reviewed and confirmed by a qualified human before any decision, booking, or action is taken based on AI input.

3. Legal and Regulatory Compliance

Our AI tools are developed, deployed, and maintained in accordance with the legal and regulatory standards applicable in the jurisdictions where we operate, including:

  • United Kingdom: UK General Data Protection Regulation (UK GDPR) and Data Protection Act 2018.

  • European Union: EU General Data Protection Regulation (EU GDPR) and the Artificial Intelligence Act (AI Act) 2024, which classifies AI systems based on risk and imposes obligations accordingly.

  • United States: Health Insurance Portability and Accountability Act (HIPAA), California Consumer Privacy Act (CCPA), and California Privacy Rights Act (CPRA). Additionally, state-level regulations such as Utah's SB 149/SB 226 require disclosure of AI use in regulated occupations.

  • Canada: Personal Information Protection and Electronic Documents Act (PIPEDA) and the proposed Artificial Intelligence and Data Act (AIDA).

  • Australia: Privacy Act 1988 and the Australian Human Rights Commission's guidelines on AI.

  • Brazil: Lei Geral de Proteção de Dados (LGPD).

  • United Arab Emirates and Gulf Cooperation Council (GCC): Federal Decree-Law No. 45 of 2021 on the Protection of Personal Data and other regional data protection laws.

  • China: Interim Measures for the Management of Generative AI Services and other relevant regulations.

  • Global Frameworks: OECD AI Principles and the Council of Europe's Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law.

These tools are subject to internal and external audit mechanisms and are continuously reviewed for ethical alignment, jurisdictional compliance, and patient safety.

4. Human Oversight & Accountability

CliniciansCheck employs a rigorous "human-in-the-loop" protocol, which ensures:

  • All AI outputs are subject to qualified human review prior to user-facing implementation.

  • Critical decisions, especially those affecting care, data protection, or public communication, are human-controlled.

  • AI tools function as assistive technology, not autonomous agents.

  • Responsibility for all final decisions made using the platform rests with the authorized user (e.g., clinician, administrator, patient) and not with the AI.

5. Limitations and Disclaimers

While AI tools are carefully engineered, the following limitations apply:

  • AI may generate content that is out of date, incomplete, or contextually incorrect.

  • AI does not understand nuance, cultural differences, or clinical complexity beyond its programmed logic.

  • We do not guarantee the accuracy, completeness, or appropriateness of any AI-generated text or suggestion.

  • CliniciansCheck disclaims all liability for actions taken solely or primarily based on AI outputs.

We reserve the right to remove, suspend, or alter AI tools without prior notice if deemed necessary to protect users, uphold compliance, or mitigate emerging risk.

6. Cultural, Language & Accessibility Considerations

Given our platform’s operation in 110+ languages and regions, our AI tools are:

  • Designed to recognize cultural, linguistic, and regional sensitivities to the extent technologically feasible.

  • Routinely reviewed and tested for inclusive language, accessibility (WCAG 2.1 AA), and multilingual readability.

  • Not substitutes for native-language or culturally tailored services where required.

Users must independently verify all AI-generated outputs within their own cultural and legal context.

7. User Rights and Transparency

All users interacting with AI tools on our platform have the right to:

  • Be informed when AI is in use.

  • Request access to AI-generated records relating to their data (where applicable).

  • Opt out of AI interactions where technically possible.

  • Request human-only support or review in relation to key actions or outputs.

Requests may be made through our Data Subject Request Form.

8. Enforcement, Updates & Contact

This AI Use Disclosure is reviewed at least annually and updated as necessary to reflect:

  • Evolving international law.

  • Advancements in AI capability or architecture.

  • Internal audit recommendations.

  • Ethical risk mitigation requirements.

Any breach of this disclosure or misuse of AI tools may result in disciplinary, contractual, or legal action.

9. Explicit Alignment with Global Ethical Frameworks

In addition to the regional regulations referenced above, CliniciansCheck explicitly aligns its AI governance with recognized global ethical frameworks, including:

  • World Health Organization (WHO) guidance on the ethics and governance of AI for health, which emphasizes transparency, accountability, and inclusiveness in healthcare AI applications.

  • The FUTURE-AI guidelines, which set out the principles of Fairness, Universality, Traceability, Usability, Robustness, and Explainability for trustworthy AI in healthcare.

This alignment reflects our commitment to ethical AI deployment beyond minimum regulatory compliance.

10. Detailed Data Governance and Risk Management Protocols

Our data governance and risk management protocols include:

  • Data Minimization and Purpose Limitation: AI systems access only the data necessary for their specific functions, consistent with principles such as HIPAA's Minimum Necessary Standard.

  • Bias Mitigation: We maintain processes to identify and mitigate bias in AI algorithms, supporting equitable outcomes across diverse populations.

  • Continuous Monitoring: AI performance is evaluated on an ongoing basis, including accuracy assessments and updates in response to new data or regulatory changes.

11. Enhanced User Rights and Transparency Measures

To strengthen user trust, we provide the following rights and transparency measures:

  • Informed Consent: Users are explicitly informed when AI is involved in their interactions, and consent is obtained where required.

  • Access and Correction Rights: Clear procedures allow users to access AI-generated data relating to them and to request correction of any inaccuracies.

  • Opt-Out Provisions: Users may opt out of AI-driven processes where feasible, with human-led alternatives offered.

12. Clarification on AI's Role in Decision-Making

To prevent misunderstandings about AI capabilities:

  • Decision Support, Not Decision Making: AI on our platform serves as a support tool; it does not replace professional judgment or make autonomous decisions.

  • Human Oversight: All significant decisions, especially those impacting health outcomes, are reviewed and approved by qualified professionals.

13. Regular Review and Update Commitments

To remain adaptable and responsive to evolving standards:

  • Scheduled Reviews: This AI Use Disclosure Statement is reviewed periodically to incorporate new regulations, technologies, and ethical considerations.

  • Stakeholder Engagement: Diverse stakeholders, including legal experts, ethicists, and user representatives, are involved in the review process to ensure comprehensive perspectives.

14. Explicit Liability Disclaimer

Users accept that any reliance on AI-generated outputs is at their own discretion and risk. CliniciansCheck shall not be held liable for outcomes resulting from user interpretation, implementation, or misuse of AI-generated content, regardless of jurisdiction.

15. AI Vendor & Hosting Statement

Some AI functionalities may be supported by third-party vendors, subject to their own compliance with applicable data protection and AI safety standards. We select vendors based on security credentials, auditability, and lawful processing agreements.

16. AI Breach Notification Clause

In the event of a significant AI system failure or breach affecting user data or outputs, we will take appropriate remediation steps and notify affected users in accordance with applicable legal requirements.

17. Children’s Data and AI

CliniciansCheck does not knowingly use AI to collect or process data from children under 16. Where children’s data is involved, additional safeguards and parental consent mechanisms are applied.

18. Language for AI Training Data

Where applicable, AI models may be refined using de-identified, aggregated platform data, in accordance with global privacy and ethical standards. No identifiable data is used for model training without explicit, lawful user consent.

19. Future-use Clause for Evolving Capabilities

As AI capabilities evolve, we reserve the right to implement new features that enhance safety, efficiency, or personalization. Any material changes will be disclosed via updated policies and subject to human review and legal compliance.

20. Statement of Jurisdictional Deference

Where local law conflicts with this policy, the local law shall prevail to the extent required, without limiting our overarching global compliance obligations.

21. Audit & Certification Note

CliniciansCheck is committed to ongoing third-party audits and voluntary certification schemes in alignment with emerging international AI standards and ISO/IEC 42001.

Contact Us:

Email: operationsteam@clinicianscheck.com

Web: www.clinicianscheck.com

CliniciansCheck Limited, 2 Harley Street, London, W1G 9PA, United Kingdom

CliniciansCheck Limited – Deploying AI Responsibly, Globally, and With Integrity
