The Power of Ethical AI in Healthcare - Navigating Challenges and Opportunities

Artificial intelligence (AI) is beginning to play a pivotal role in the health and social care sector. From remote patient monitoring and electronic care planning to disease diagnosis, staff rostering, automating repetitive tasks, and analysing large quantities of data, there’s no denying that AI has the power to transform the way we deliver care - making us more efficient and saving us time and money in the process.

While existing laws like the Equality Act and GDPR have some say in how and where AI can be used, the UK has so far favoured a context-based, sector-led approach to AI regulation, which means it is yet to implement a dedicated national regulatory framework. Without appropriate standards and safeguards in place, the integration of AI into health and social care raises significant concerns, particularly around ethical practice.

by Holly West-Robinson

Writer on healthcare

Posted 11/10/2024

Understanding the Ethical Imperative

A survey commissioned by the Health Foundation showed that the health and social care sector lacks trust in AI technologies, with 1 in 10 NHS staff and 1 in 6 members of the public believing that AI will worsen the quality of care in years to come. This scepticism stems from data and privacy concerns, the risk of AI bias, and the lack of transparency in clinical decision-making. As a result, deploying AI in healthcare ethically has become a moral challenge for regulators and policymakers.

Biased Data and Hallucinations

One of the biggest dangers of AI is its potential to compromise human judgement. This is an enormous risk in clinical settings, where a healthcare professional may make decisions based solely on the AI's output without sufficient review or consideration of the person's medical history or the long-term impact on their health.

Compounding this is AI's potential to “hallucinate”. This occurs when an AI model produces information that is false or misleading yet presents it as fact. For example, generative AI models like ChatGPT can produce statistics, dates, and other content that are entirely made up, yet appear coherent and logical. This potential to produce false information raises major concerns about approving such models for use in clinical settings.

“If the data that you’re generating your solutions on has bias in its actual data, then it’s likely to have bias in its answer.” – Alan Payne, Product and Engineering Director at Access. 

Privacy and Security

AI's place in health and social care already raises concerns about data security and privacy in terms of training, i.e. how it processes and presents datasets and how it determines outcomes. Internet trends such as deepfakes and the use of AI to scour social media for face mapping highlight the ease with which AI could expose people to risk and fraud.

For instance, an AI system could mimic a doctor’s voice or appearance to deceive health and social care providers or the people they care for, potentially leading to fraudulent activities like manipulating data, falsifying documents, and creating fake prescriptions. In addition, personal health information acquired through AI-powered facial recognition tools on social media could be exploited, compromising patient confidentiality and trust in healthcare systems.

Replacing Human Interaction

AI's role in health and social care prompts other ethical considerations. At the Digital Health Show this year, SomX's CEO, James Somauroo, gave a compelling talk on AI's ability to mimic human emotion, posing the big question: “If AI can do everything, should it?”

He believes that while AI should handle repetitive tasks like administrative work, diagnostics, and certain communication elements, empathy, a core aspect of care delivery, should be left to humans.

"Without true, biological empathy there's a risk of manipulation and unethical behaviours because unlike with humans, we don't know the motivation of AI,” said Somauroo.

The question around whether AI should take on sensitive care roles, such as addressing loneliness in elderly individuals or providing ChatGPT-style therapy for mental health issues, was also highlighted as a matter of concern, as these roles could significantly diminish meaningful human interaction and care. 

"If we are ok with AI giving us fake cognitive empathy at the point of human suffering, that is a serious decision we have made as a community, and one that erodes the fabric of what healthcare actually is,” he added.

 

Collaborative Validation

Another challenge with AI solutions is that they are often created solely by engineers without adequate input from clinicians. If an algorithm is tested and performs as well as or better than the standard set for a health or social care professional, there are normally grounds for it to be implemented; however, clinical validation is still necessary.

This is why a system that draws on the expertise of clinicians, software engineers, product engineers, data scientists and compliance teams, and that is measured against set key performance indicators (KPIs), is paramount in helping AI reach its full potential. If the system supporting the AI isn't established correctly, the rules the AI operates on will be flawed, leading to a higher risk of medical errors, misdiagnoses and misinterpretation of data.

How Does the UK Plan to Regulate AI?

Following in the EU's footsteps with the drafting of the AI Act, the UK began developing its own legal framework for AI systems in a White Paper in March 2023, with many of the principles echoing the EU's attitude towards governance around privacy, data handling and the safeguarding of human rights.

A “context-based approach” that focuses on cross-sectoral risk monitoring rather than a one-rule-for-all approach is likely to be adopted, allowing some AI systems and tech firms to be exempt from certain regulations.

Other focus points in the White Paper included:

  • Engaging with experts to develop interventions for advanced AI systems.
  • Promoting AI opportunities and tackling risks associated with autonomy, misuse, and societal harm.
  • Building a central function to drive coherence in the government’s approach to regulation.
  • Encouraging effective AI adoption and providing industry support and guidance to workers.
  • Supporting collaboration on AI governance on an international scale.

Though the approach is still under review, the government published its response to the White Paper on February 6th, 2024. So far, the proposal has received backing from AI tech giants like Microsoft, OpenAI, Anthropic, and Google DeepMind, plus leading AI safety experts and developers.

“The technology is rapidly developing, and the risks and most appropriate mitigations are still not fully understood,” said the Department for Science, Innovation and Technology (DSIT) in a press release.

“The UK government will not rush to legislate, or risk implementing ‘quick-fix’ rules that would soon become outdated or ineffective. Instead, the government’s context-based approach means existing regulators are empowered to address AI risks in a targeted way.”

A budget of £10 million has been set aside to support research and various safety projects related to AI. The government fund will assist regulators in developing new technologies designed to spot and address AI-related risks in public sectors such as healthcare, finance and education.

The Information Commissioner's Office has also updated its Guidance on AI and Data Protection since the White Paper response, clarifying the requirements for fair AI practices, the handling of personal data, and how compliance will be enforced.

In addition, the National Institute for Health and Care Excellence (NICE), the Health Research Authority, the Medicines and Healthcare products Regulatory Agency (MHRA), the Care Quality Commission (CQC), and the NHS AI Lab launched the AI and Digital Regulations Service in 2023, which collates strategies and guidelines to support the use of AI and digital technologies in health and social care.

While the UK government will look to build on these regulations as the AI landscape continues to expand, the agile regulatory framework will enable quick responses from regulators when new risks emerge, fostering innovation and growth opportunities for developers of AI tools and technology within the UK.

Potential Solutions to Integrate AI into Health and Social Care Safely

Overcoming the obstacles that inhibit AI involves implementing ethical guidelines, structured training and interpretability, plus ethical auditing. AI's ability to “think” and make judgements is only as good as the humans who programme it; it cannot put boundaries in place by itself. That's why ethical considerations and regulatory frameworks are vital for the responsible development, validation, and deployment of AI in the health and social care sector. Other potential solutions include:

Safeguarding Data and Privacy

For AI to comply with data protection regulations and operate ethically, it needs to be built around fairness, accountability, and transparency. While it is possible to turn off chat histories to prevent large language models (LLMs) like ChatGPT from training on conversations with users, generative AI should be treated just like any other third-party software.

This means developing a robust security strategy that outlines the types of activities and tasks it should be used for, the types of AI programs that can be used safely, and who should have access to it within an organisation. This strategy should also involve implementing strong encryption protocols and multi-factor authentication to protect patient data at all stages—collection, processing, and storage.

AI governance frameworks should also enforce ethical data usage and include advanced auditing tools to detect and prevent fraudulent activity. The Access Group is working with the Institute for Ethics in AI at the University of Oxford, along with Reuben College and Casson Consulting, to develop a set of guiding principles for the safe and ethical deployment of AI in health and social care. These five principles are:

  • Human Rights and Well-Being Law: Generative AI must prioritise human rights and wellbeing to enhance care quality and dignity in health and social care.
  • Safety and Responsibility Law: AI should be used safely and responsibly to ensure data security, bias prevention, and informed consent, with regular safety checks and output validation.
  • Support and Augmentation Law: Generative AI should enhance human caregiving and act as a support pillar in clinical settings by promoting autonomy and wellbeing for both care recipients and caregivers.
  • Transparency and Collaboration Law: Generative AI should be transparent and foster collaborative frameworks and engagement with stakeholders, regulators, caregivers, and those receiving care.
  • Continuous Learning and Adaptation Law: Adaptation in AI use, ongoing learning, and sector-wide training should be conducted regularly to ensure considerations of ethics and development of best practices.

Hybrid Collaboration Protocols

To prevent biased AI outputs from influencing decision-making, collaboration protocols between AI and humans would ensure that AI-generated recommendations or outputs are subject to mandatory human oversight, particularly for decisions that could be detrimental to a person’s care—AI should be used to support, not replace, clinical decisions.

In addition, integrating regular validation of AI outputs, combined with real-time cross-checking by healthcare professionals, would mitigate the risk of AI hallucinations and ensure that medical history and long-term health impacts are always considered before final decisions are made.

Human-in-the-Loop Systems

A potential solution to the concern of AI replacing human interaction in healthcare is to ensure that AI is used as a supportive tool rather than a replacement for human empathy and decision-making.

One approach could be the integration of "human-in-the-loop" systems, where AI handles administrative and repetitive tasks but human professionals remain responsible for people-facing roles, particularly in emotionally sensitive situations. This would ensure AI enhances care delivery efficiency while preserving the essential human connection that is critical for people's wellbeing.
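For the technically minded, the human-in-the-loop pattern described above can be sketched in a few lines of code. This is an illustrative sketch only, not a description of any Access product, and every name in it is hypothetical: the key idea is simply that an AI-generated recommendation can never reach the care record until a named clinician has signed it off.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    """A hypothetical AI-generated suggestion awaiting mandatory human review."""
    patient_id: str
    suggestion: str
    approved: bool = False
    reviewer: Optional[str] = None

def clinician_review(rec: Recommendation, reviewer: str, approve: bool) -> Recommendation:
    """Record the human decision; the AI output alone never triggers action."""
    rec.approved = approve
    rec.reviewer = reviewer
    return rec

def act_on(rec: Recommendation) -> str:
    """Only approved, human-reviewed recommendations are applied."""
    if not rec.approved or rec.reviewer is None:
        return "blocked: awaiting clinician sign-off"
    return f"applied to record for {rec.patient_id} (signed off by {rec.reviewer})"

rec = Recommendation(patient_id="P-001", suggestion="Review dosage")
print(act_on(rec))  # prints "blocked: awaiting clinician sign-off"
print(act_on(clinician_review(rec, "Dr Patel", approve=True)))
```

The design choice the sketch illustrates is that the "gate" sits in the acting step, not the reviewing step: even if a review is skipped or a system is misconfigured, the default is to block rather than to act.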

With all the above measures in place, data security would be enhanced, the transparency of machine biases would improve, and the wellbeing of care givers and care recipients would be better protected. This would result in more trustworthy AI-based analysis, improving the justification of AI-driven decisions within medical domains. Nonetheless, regulatory issues that aren’t black and white should continue to be approached with caution, with deep research into potential benefits and risks prior to making decisions.

Access’ AI-Powered Solutions to Support Health and Social Care

“Artificial Intelligence is a key enabler in redefining the relationship between health and social care to provide better support and outcomes for citizens and the workforce across the whole care continuum.” - Jardine Barrington-Cook, Director of Integrated Care in HSC at Access.

Time is public enemy number one in health and social care. Not only does wasted time cost the industry vast sums each year, it's also a fundamental, non-renewable resource that is gone for good once spent. However, thanks to modern solutions like The Access Group's generative AI software experience, Evo, and our robust integrated care platform, IntelliCare, AI now has the ability to give time back by removing the obstacles that cause delays in clinical workflows, hinder productivity, and waste precious hours on unnecessary processes.

Access Evo

It's thanks to Evo’s intelligent AI automation and its ability to integrate seamlessly with existing tools and systems that health and social care providers are now empowered to tackle both routine tasks and major strategic projects with complete ease and confidence. Whether this is answering critical questions, compiling reports, or managing day-to-day operations, Access Evo leverages the power of AI to transform even the most laborious tasks into manageable actions that make a difference to you and the people you care for.

It's not just about efficiency: it's about utilising the existing technology available and enhancing it with something simple, secure and cost-effective, without the need to invest in dozens of add-ons, new products or platforms that complicate operations and push up expenditure. With a 3-tier security model, secure data handling and scalable functionality, Evo harnesses the power of digital to help businesses grow organically while enhancing their capabilities and operations in the process.

Find out how Access Evo and our other groundbreaking tools can help you reduce friction and transform care delivery within your organisation.

Access IntelliCare

IntelliCare is an AI-assisted operational improvement platform that fully integrates with clinical systems and brings unique single-feed functionality to join up the processes, systems and people within the health and social care system. Through enhanced visibility of patient activity, IntelliCare supports a meaningful shift to prevention-focused care, in which a holistic and collaborative approach ensures effective delivery.

The platform has been co-designed with clinicians to deliver benefits across the board, ranging from cost savings, improved efficiencies, higher levels of satisfaction and better patient outcomes, with clearly defined pathways that mitigate the challenges so often seen and heard about in the current health and social care landscape.

With the help of advanced analytics and automated testing coupled with intuitive tools like ambient dictation and Microsoft’s AI assistant ‘Copilot’, IntelliCare gives clinicians greater control, confidence, and visibility that far exceeds the capabilities of shared care records.

For a personal consultation to discuss how IntelliCare can enhance your organisation’s operations, watch a demo or get in touch.  

Our Commitment to Ethical AI

AI’s potential in healthcare is exciting, particularly for its abilities in data analysis, predictive capabilities, and clinical support. However, this promise comes with the challenge of addressing ethical concerns, especially in areas like patient care and data management. These considerations include how AI handles sensitive information, its impact on decision-making, and ensuring its use does not compromise patient integrity or privacy.

Access’ Head of Product and Engineering, Alan Payne, is highly passionate about this topic and has recorded an entire mini-series dedicated to AI’s evolving role in health and social care, touching on key points such as solutions to problems and challenges, the safe and ethical deployment of AI in clinical settings, and how it can be used to enhance the workflows of trusts and health organisations. Watch the full series here.

Conclusion

With the UK AI market expected to grow beyond £20 billion by the end of 2030, its potential to transform the health and social care sector and numerous other industries is evident. However, issues around ethics, regulation and data security will continue to limit this evolution until we get the formula right.

A regulatory framework may serve as a safety net initially, but governments know that too many blockades stifle innovation, prevent service improvements, and obstruct cash flows. This is why an agile, adaptive approach to AI is the only approach that will see it thrive and make any real difference.

AI is no longer a vision of the distant future but an essential component of both our current reality and what lies ahead. Rather than fight it, we need to ride these tidal waves of change and innovation and embrace the possibilities AI creates for health and social care - because, in the now famous words of the Borg, "Resistance is futile."

By Holly West-Robinson

Writer on healthcare

Holly is a Digital Content Writer for Access Group's Health and Social Care division.

Passionate about the transformative power of technology, her writing is centred on digital solutions like virtual wards and integrated care systems, which she believes are essential to prevention and the future of healthcare.