The world continues to become more automated, but how much responsibility should you give AI within your pharmacy?
Artificial intelligence already plays a larger role in pharmacy than many individuals may realize: From automated text messaging programs to inventory solutions and adherence packaging, the pharmacy field has rapidly expanded its incorporation of artificial intelligence in recent years.
A recent study published in the journal Innovations in Pharmacy reviewed even more functions of AI in the field.1 “During the past few years, a considerable amount of increasing interest in the uses of AI technology has been identified for analyzing as well as interpreting some important fields of pharmacy like drug discovery, dosage form designing, polypharmacology, and hospital pharmacy,” the authors wrote.
And now, in 2023 alone, there has been a concerted effort to improve AI’s capabilities, an effort that carries over into the health care field as a whole. But in an industry like community pharmacy—where face-to-face interactions have always been an important part of the job, and where customers develop trust with their pharmacists—how much of that trust should pharmacists feel confident putting into AI?
In 2022, the European Parliament’s Panel for the Future of Science and Technology explored both the benefits and potential risks of using AI in health care. The list of risks included2:
· Patient harm due to AI errors,
· The misuse of medical AI tools,
· Bias in AI and the perpetuation of existing inequities,
· Lack of transparency,
· Privacy and security issues,
· Gaps in accountability, and
· Obstacles in implementation.
Though these risks are for the whole of health care, most—if not all of them—can be applied to pharmacy specifically. From that study:
“As with most health technologies, there is a risk for human error and human misuse with medical AI. Even when the developed AI algorithms are accurate and robust, they are dependent on the way they are used in practice by the end-users, including clinicians, healthcare professionals, and patients. Incorrect usage of AI tools can result in incorrect medical assessment and decision making and subsequently in potential harm for the patient.”
Though tasks like text message alerts and inventory management are already standard practice, implementing other AI technologies without properly understanding them might be tempting. But that can lead to dangerous outcomes.
“It is not enough for clinicians and the general public to have access to medical AI tools, but it is also necessary for them to understand how and when to use these technologies.”
Making sure that customers are aware of which services are driven by artificial intelligence may also become increasingly important as the technology becomes more prevalent. With AI voice and writing becoming harder to distinguish from humans, informing patients when interactions or front-end business are handled by AI is the ethical business practice.