Background: Clinical artificial intelligence (Clinical AI) is now a commonplace tool used for administrative, diagnostic and therapeutic purposes. However, it raises novel and complex liability questions involving multiple actors, including physicians, hospitals, suppliers and vendors. Objective: This paper compares the relevant laws in nine jurisdictions in the MENA region - the United Arab Emirates (UAE) and the other GCC states, alongside Algeria, Egypt and Morocco - to understand how evolving legislation, regulation and ethical codes shape fault and damages. Methods: The roles in the Clinical AI ecosystem are defined, and core legal duties relating to consent, data protection, patient rights and safety oversight are mapped. The paper then examines (i) the hospital’s responsibilities for selection, governance, training and ongoing post-deployment monitoring; (ii) the accountability of clinicians in mixed “decision-support” workflows, including disclosure of AI use to patients and the standard-of-care implications of accepting or overriding AI output; and (iii) supplier/vendor exposure through product liability, performance warranties, cybersecurity obligations, licensing/clearance, and ongoing post-market updates and supervision. Results: The comparative analysis reveals significant differences among MENA liability models. It also highlights the need for facility-wide incident-response processes that encompass model drift, cyber events and data-quality failures, which cause harm more often than algorithmic malfunction. Conclusion: Improved risk allocation through governance, documentation, contracting and incident reporting can reduce uncertainty, enhance patient safety, and facilitate more scalable cross-border deployment of Clinical AI in the region.
Keywords: Clinical artificial intelligence; medical liability; product liability; MENA health regulation; risk allocation; incident response.