Regulating the Black Box of Medical AI

Two scenes, 5000 miles and 10 months apart:

April 2016 ~ One of the largest suppliers of electronic healthcare record systems to General Practitioners in the UK realises that an algorithm used to estimate patients’ risk of heart disease or stroke has been coded incorrectly. As a result of this simple programming error, thousands of patients received incorrect information about their risk, potentially being prescribed unnecessary drugs or missing out on preventative treatment.

Feb 2017 ~ Researchers from Stanford University publish a research letter in Nature describing the use of a deep convolutional neural network (a type of machine learning algorithm that takes inspiration from the layered structure of the part of the brain responsible for vision) to diagnose skin cancer. After training on 130,000 pictures of assorted spots, rashes, blemishes and skin lumps, the neural network was able to diagnose skin cancers with the same level of accuracy as qualified dermatologists. In principle, the system could be used to automatically diagnose likely skin cancers from a smartphone snap: skin selfie to diagnosis in seconds.
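
For a sense of what this kind of system involves under the hood, the sketch below shows the general transfer-learning recipe (fine-tuning a convolutional network that has already been pretrained on everyday images) using PyTorch. To be clear, this is an illustrative assumption rather than the Stanford system, which used a different architecture and a far larger curated dataset; the folder path, class labels and training settings here are hypothetical placeholders.

```python
# Minimal sketch of fine-tuning a pretrained CNN on labelled skin lesion
# photos. Illustrative only: the dataset path, classes and hyperparameters
# are hypothetical, and this is not the published Stanford pipeline.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing expected by the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder of labelled photos, one sub-folder per diagnosis
# (e.g. "benign/" and "malignant/").
train_data = datasets.ImageFolder("skin_lesions/train", transform=preprocess)
train_loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Start from a network pretrained on ImageNet and swap the final layer
# for one sized to the number of lesion classes.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()
```

The important point is not the particular code, but that the resulting diagnostic “knowledge” ends up encoded in millions of learned numerical weights rather than in human-readable rules – which is exactly what makes the regulatory questions below so awkward.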


These examples illustrate both the potential and the pitfalls of a future where healthcare is increasingly automated. Among the many questions raised by the rapid development of machine learning techniques and applications is what this means for medical regulation. Healthcare services are, for obvious reasons, highly regulated and subject to a wide range of legal and regulatory frameworks to ensure their quality and safety. Patients expect the doctors, nurses and therapists providing their care to have the necessary skills, knowledge and qualifications; that the drugs they take are manufactured and prescribed safely; that medical equipment works; that lab results are accurate; and that errors and accidents during care will be prevented and dealt with.

All these are subject to a hugely diverse range of laws and regulations. From a UK (or more specifically, England) perspective, the three areas of healthcare regulation most likely to be impacted by increasing automation are:

The regulation of medical devices by the Medicines and Healthcare products Regulatory Agency (MHRA). As well as regulating drugs and medical devices (e.g. cardiac stents, joint replacements), the MHRA is the statutory regulator of medical software and apps. Software involved in clinical decision making (think software that helps calculate drug dosages or makes treatment recommendations, but not software used for booking appointments) is regulated as a medical device. The MHRA has a nice summary of what this means for developers and clinicians: in short, low-risk applications are managed through a self-certification approach, but higher-risk applications need to be validated by an independent organisation. The approach is closely linked to European Union regulations (specifically MEDDEV 2.1/6) and the process of CE certification (which incidentally means that whatever happens as a result of Brexit is going to have major implications for the regulation of medical AI in the UK). Most of the interest in machine learning based applications is in the field of diagnostics (in particular radiology, ophthalmology and dermatology), and these would almost certainly be regulated as medical devices and require CE certification.

The regulation of the providers of healthcare services by the Care Quality Commission (CQC). The CQC is the statutory regulator of healthcare services (e.g. hospitals, GPs, care homes, community services, dentists) in England. This includes not only traditional healthcare services but also providers of online consultations or medical services such as Babylon. Providers are assessed on a range of criteria to ensure that the services they provide are safe, effective, caring, responsive and well-led. It is not clear whether fully automated healthcare services would fall under the remit of the CQC, but this seems likely, especially if these services were being purchased on behalf of patients by the NHS. Certainly, the use of AI by traditional providers would also potentially be of interest to the CQC – how, for example, hospitals demonstrate that their machine learning based radiology systems are safe and accurate.

The regulation of medical professionals. In the UK, the main professional regulators are the General Medical Council (for doctors) and the Nursing and Midwifery Council (for nurses and midwives). These bodies set the standards of training, behaviour and practice expected of healthcare professionals. There are going to be some difficult challenges in how these standards are interpreted in an age of increasing medical automation. For example, where do the professional responsibilities of a doctor begin and end when she is following a treatment plan recommended by a machine learning algorithm? One of the issues here is that, in contrast with a traditional protocol or clinical guideline, the rationale for how and why a machine learning algorithm has generated a particular output or recommendation can be very hard (or even impossible) to determine. How can doctors and nurses assure themselves, and in turn their professional regulators, that they have acted responsibly when they are making use of algorithms that are essentially black boxes whose internal processes and decision making are hidden? Current guidelines for clinicians about the use of medical apps are linked to MHRA regulation and CE certification, and this could provide a blueprint for future regulation of clinicians’ use of medical AI technology.

This is almost certainly an over-simplification of the many regulatory issues surrounding the implementation of machine learning applications in healthcare. And I haven’t even mentioned any of the issues relating to legal liability and litigation (I am not a lawyer), but these are likely to be at least as complex (though you never know, maybe we will one day have our medical algorithms being taken to virtual electronic court by the legal AIs).

Although this seems like a dry topic, getting the regulatory frameworks right is important. A “wild west” approach to healthcare, free from any regulatory oversight, is unlikely to be acceptable to society and could lead to a great deal of harm (a digital equivalent of healthcare in the era of blood letting, snake oil salesmen and quackery). At the same time, poorly designed regulation may fail to provide the intended protection to patients, generate perverse incentives and unexpected harms, and stifle innovation and implementation. I don’t know what the ideal regulatory framework for medical AI looks like, but there are a few things that we could be doing now to increase the chance that we get this right:

  1. Look across and share learning with other industries also being changed by automation. What can healthcare learn from regulatory approaches to machine learning and automation in say, transportation, legal services or fintech?
  2. Develop better ways to unpack, inspect and understand the black box of algorithms. For complicated neural networks this is at present exceptionally hard, if not impossible (imagine, for example, using a brain CT scan to explain how your brain creates the visual perception of a beautiful sunset). Making artificial neural networks explainable is, however, an active area of research and would help immensely in developing regulatory frameworks for medical AI.
  3. Develop approaches to measuring and evaluating the quality and safety of medical AI applications. This could involve extending existing post-marketing surveillance and reporting systems to include medical AI, and setting up registries and audits to measure the real-world outcomes of patients managed using these systems. We might need to think creatively about how to capture this type of data: it might, for example, be useful to capture a record of the “mind state” of the machine learning algorithm at the time it made a particular recommendation or decision (if an algorithm is continually being updated and learning from new data, it would be important to be able to know if it made a lot of dangerous mistakes one Tuesday afternoon, for example). A rough sketch of what such a record might look like follows this list.
  4. Start thinking about these regulatory issues sooner rather than later. It would be much better if (proportionate, wise) regulation developed alongside technical innovation and implementation, and not only in response to some major quality or safety scandal.
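
On point 3, here is a rough, hypothetical sketch (in Python) of the sort of “mind state” record that could be logged every time an algorithm issues a recommendation. The field names and values are my own illustrative assumptions, not any existing standard; the idea is simply that the exact model version, parameters and inputs are fingerprinted so that an auditor can later reconstruct what the system was doing at a particular moment.

```python
# Hypothetical audit record for an algorithmic recommendation.
# Field names and values are illustrative assumptions, not a standard.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    timestamp: str      # when the recommendation was made
    model_name: str     # which algorithm or product produced it
    model_version: str  # software release identifier
    weights_hash: str   # fingerprint of the exact model parameters in use
    input_hash: str     # fingerprint of the (de-identified) patient data
    output: dict        # the recommendation itself and its confidence

def fingerprint(data: bytes) -> str:
    """Short, stable fingerprint so the exact state can be audited later."""
    return hashlib.sha256(data).hexdigest()[:16]

# Log one (entirely fictional) recommendation.
record = DecisionRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_name="lesion-classifier",
    model_version="2.3.1",
    weights_hash=fingerprint(b"...serialised model weights..."),
    input_hash=fingerprint(b"...de-identified input image bytes..."),
    output={"recommendation": "refer for biopsy", "probability": 0.87},
)

# Append-only log, one JSON line per decision, so a regulator or auditor can
# later ask "what exactly was this model doing last Tuesday afternoon?"
with open("decision_log.jsonl", "a") as log:
    log.write(json.dumps(asdict(record)) + "\n")
```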

If you want to read in more detail about the issues of licensing AI applications in healthcare, I can heartily recommend this blog by Dr Mark Wandle.
