
The UK’s carers should be cautious of using unregulated AI bots, according to researchers from Oxford University.

Study findings

A pilot study by academics at the University of Oxford found some care providers had been using generative AI chatbots such as ChatGPT and Bard to create care plans for people receiving care.

That presents a potential risk to patient confidentiality, according to Dr. Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford, who surveyed care organizations for the study.

Substandard AI-generated care plan

Speaking to The Guardian, Dr. Green said, “If you put any type of personal data into [a generative AI chatbot], that data is used to train the language model. That personal data could be generated and revealed to somebody else.”

She said carers might act on faulty or biased information and inadvertently cause harm, and an AI-generated care plan might be substandard.

Revisit care plans

But there were also potential benefits to AI, Dr. Green added: “It could help with this administratively heavy work and allow people to revisit care plans more often. At the moment, I wouldn’t encourage anyone to do that, but there are organizations working on creating apps and websites to do exactly that.”

Resisting mass AI adoption

Cybersecurity expert Oseloka Obiora, CTO of RiverSafe, said, “Health professionals must resist the temptation to embrace mass AI adoption without proper protocols and cyber protection in place."

Obiora added, "Tight budgets and heavy workloads are no excuse for allowing unchecked technology to run riot, triggering potential privacy breaches, exposure of personal data, and inaccurate outcomes for patients."

Using generative AI responsibly

While people in creative industries worry about being replaced by AI, social care faces the opposite pressure: the sector has about 1.6 million workers and 152,000 vacancies, alongside 5.7 million unpaid carers looking after relatives, friends, or neighbors.

In February 2024, 30 social care organizations, including the National Care Association, Skills for Care, ADASS, and Scottish Care, met at Reuben College to discuss how to use generative AI responsibly. Green, who convened the meeting, said they intended to create a good practice guide within six months and hoped to work with the CQC and the Department for Health and Social Care.

AI in healthcare

Stuart Munton, Chief for Group Delivery at AND Digital, said, “With frontline staff under pressure, the case for adopting AI in the healthcare sector is compelling, but this research is another reminder of the risks associated with unchecked technology being allowed to make recommendations for patients."

Munton added, "The truth is that on balance AI will bring huge benefits to health professionals in the long term, but this demand needs to be juggled alongside mitigating error, cyber risks, and privacy concerns."
