The healthcare industry and academia are in the midst of a technological revolution, with advancements in Artificial Intelligence (AI) and other key technologies offering new opportunities to improve patient outcomes and revolutionise how healthcare is delivered. The Faculty of Information and Communication Technology and Faculty of Health Sciences at the University of Malta held a symposium on "AI technology: Is it the new patient source to go for information?" which discussed the potential benefits and challenges of using AI-powered chatbots like ChatGPT as a source of patient information.
Course coordinators Dr Conrad Attard and Dr Stephen Lungaro Mifsud gave an overview of the university's interdisciplinary postgraduate course in Digital Health, the Master of Science in Digital Health. The course provides participants with the technological expertise and understanding of the health landscape needed to help and support patients, increase staff productivity, improve the management of patient records, reduce errors, extend conventional care, and influence the future of healthcare.
The expert panel, moderated by Dr Ermira Tartari (Lecturer, Faculty of Health Sciences), brought together Dr Gege Gatt (CEO of EBO, a London-based AI enterprise that automates patient engagement; a digital entrepreneur, TEDx speaker, and IT-law specialist), Dr John Montebello (registered specialist in family medicine and occupational medicine, and president of the association of private family doctors), and Gavin Schranz (student in the Master of Science in Digital Health, member of the Med-Tech World team, and patient advocate). The panellists explored the opportunities presented by embracing AI and other key technologies, the obstacles to innovation in healthcare, and the blockers to the adoption of health tech solutions by patients. They also discussed the potential risks of using AI-powered chatbots, such as clinical safety and medico-legal liability, and the importance of safeguarding patient privacy and data management.
The field of healthtech sits at the intersection of technology, law, ethics, marketing, and healthcare, and it will be an exciting space for the next 20 years: it is where the economic moat will be, and where we can offer society healthcare as a public good. Health equity is an important part of our future, and technology is crucial in making healthcare more accessible to marginalised groups. AI is a critical component of healthtech, but its capabilities are far broader than what we see in ChatGPT. The exponential growth curve and speed with which AI develops mean that what is yet to come will make today's technology seem rudimentary. Healthtech's first objective is patient engagement, empowering patients to be more attuned to their care plans. It also helps overworked healthcare professionals provide a better service to society. Healthtech spans many technologies, from patient record systems to automated diagnostic tools to drug discovery.
However, challenges and concerns, such as privacy, must be addressed. Collaboration between experts from different disciplines is necessary to understand these challenges, concerns, and opportunities and to manage the process of developing healthtech.
What are the legal and ethical implications of liability, insurance, and data protection in the context of using ChatGPT for patient-doctor communication?
The principles of law and liability apply when using ChatGPT. When you use ChatGPT, you agree to an 'adhesion' contract: terms and conditions that disclaim all forms of liability and transfer responsibility onto the user. The patient therefore takes on liability for their own actions when interacting with the tool. OpenAI, the company behind ChatGPT, does not assume juridical responsibility for the actions of the technology itself, instead passing it on to the user. In fact, the technology clearly states: "ChatGPT may produce inaccurate information about people, places, or facts." However, as the importance of the technology grows within society, the company needs to be accountable and responsible for the technology it brings into the world. Without this, it could be releasing technology that causes harm without any responsibility towards the safety of its users.
It is important to remember that ChatGPT is not a substitute for a doctor and is not intended to give factual medical advice. Technologies of this kind tokenize language and predict the most likely next word in a sentence, so their output is not necessarily grounded in fact or truth. It is crucial to understand the limitations of the technology and not rely on it for critical medical decisions. Entrepreneurs must be aware of liability issues, and companies must be responsible and accountable for their technology. The discussion around liability in this area is ongoing and critical to ensuring the technology's safe development and use. Dr Gatt emphasised the importance of companies taking responsibility and accountability for the technology they release. Dr Montebello highlighted the importance of patient confidentiality and data protection, and the potential liability issues that could arise if bad advice or flawed information is provided.
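The next-word prediction mechanism described above can be illustrated with a deliberately simplified sketch: a toy bigram frequency model trained on a made-up corpus. (Real systems such as ChatGPT use large neural networks over sub-word tokens, not word-frequency tables; the corpus and function names here are illustrative assumptions.) The point it demonstrates is exactly the panel's caveat: the model emits whatever word was statistically most likely to follow, with no notion of whether the result is true.

```python
# Toy sketch of next-word prediction: a bigram frequency model.
# This is a hugely simplified stand-in for how language models work;
# the corpus below is invented purely for illustration.
from collections import Counter, defaultdict

corpus = "the patient saw the doctor and the doctor saw the chart".split()

# Count which word follows each word in the training text.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # the statistically likeliest follower, not a "fact"
```

The prediction is purely statistical: if the training text happened to contain misinformation, the model would reproduce it just as confidently.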
Gavin highlighted the potential of ChatGPT for logistical purposes, such as allowing patients to interact with virtual agents, access medical records, and manage appointments. However, caution is needed when using AI for medical advice. AI can facilitate access to healthcare services, particularly for patients with low digital literacy, but it is important to remember that AI in healthcare is a means to an end: giving patients access to health information. The shift from a reactive to a more proactive care model was also discussed. Gavin attributed this shift to developments in the healthcare industry, where the doctor-patient relationship has evolved from one-way to two-way communication.
Gavin also shared his personal experience, noting that as he has grown older, he has become more empowered to take charge of his health. This shift was attributed to better access to information and knowledge about his health, allowing him to adopt self-management of care. He emphasised the importance of patient engagement and access to information in achieving better clinical outcomes, with AI playing a complementary role in facilitating access to healthcare services.
The future potential of AI in healthcare and the rapid advancements expected in the field
The healthcare industry faces a shortage of healthcare professionals, and AI technology is not expected to replace them. Instead, both healthcare professionals and the systems they work in will have to evolve to accommodate the integration of AI. Access to the technology, for patients and healthcare professionals alike, is critical, and the systems must be user-friendly, stable, and accessible. The future of work in healthcare will be a blend of AI and humans, in which healthcare professionals with AI training and access to AI tool sets will replace those without.
AI technology is not meant to replace the workforce but to identify areas where technology can outperform its human equivalents. AI can be used to automate the first assessment stage, but everything afterwards still requires the patient-doctor interface. The introduction of an app in Sub-Saharan Africa has increased access to healthcare, providing highly accurate diagnoses and treatments for skin cancers and melanomas. The future of healthcare lies in the integration of AI and humans, where technology can help make healthcare accessible and accurate, improving the population's health.
Adhesion contracts are pre-written and presented to users on a "take-it-or-leave-it" basis, meaning that the user has no ability to negotiate or modify the terms. They are common in situations where one party has significantly more bargaining power than the other, such as with software licences, rental agreements, and insurance policies.