An ethical prescription: how AI can augment human perception with inclusivity at its heart

By Piotr Orzechowski, CEO and founder of Infermedica.


Thanks to advances in artificial intelligence (AI) and machine learning (ML), computer systems can help patients make the right decisions and support healthcare professionals in diagnosing more conditions more accurately and quickly, saving human lives in the process.

However, even within the same country, access to healthcare can vary widely. Take the US, for instance.

More like a continent than a country, the US tells a wildly different story from place to place, as popular culture shows: the working conditions of ER's doctors and nurses treating underprivileged patients in Chicago, Illinois, are worlds away from those of the research-rich medical experts in Grey's Anatomy, set sky-high in Seattle.

These might be works of fiction, but one fact remains true: the healthcare industry still has a way to go to become truly inclusive. This is where technology comes in, as it can help plug gaps in human resources and make healthcare accessible and convenient for everyone. How can AI be used ethically and effectively to reach these better healthcare outcomes?

AI is not a silver bullet: Humans must be in control 

Of course, AI is not a silver bullet, and biases have been found in some AI-powered tools; after all, it's humans who create the algorithms. Bias is complex and comes in many forms, and creating a product without inherent bias is difficult, but it is something we should all aim for.

Many types of bias are present across industries such as healthtech, including confirmation bias, cultural bias and self-serving bias.

In machine learning, datasets can introduce bias through a lack of diversity, as can the way an algorithm is developed. In the context of healthcare, it is important to address these biases from the start to establish fair accessibility.
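To make this concrete, here is a minimal sketch, in Python, of what auditing a training set for representation gaps might look like. The column names, threshold and data below are illustrative assumptions, not any vendor's actual pipeline.

import pandas as pd

def audit_representation(df: pd.DataFrame, column: str, min_share: float = 0.05) -> pd.Series:
    """Report each subgroup's share of the data and warn about any below min_share."""
    shares = df[column].value_counts(normalize=True)
    for group, share in shares.items():
        if share < min_share:
            print(f"WARNING: '{group}' makes up only {share:.1%} of '{column}'")
    return shares

# Toy training set: older patients are badly under-represented.
train = pd.DataFrame({
    "age_group": ["18-39"] * 80 + ["40-64"] * 17 + ["65+"] * 3,
})
audit_representation(train, "age_group")  # flags the "65+" group at 3%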

Undeniably then, AI needs supervision. By incorporating doctor-led curation and validation alongside machine learning, the methodology can be protected from biases that might appear, and data risks can be mitigated effectively.

What's more, AI can help remove human-led biases. Humans must nonetheless maintain oversight of AI, work in tandem with it and understand its value, so that they remain in full control of medical decisions and healthcare systems.

Human control refers not only to how the system is built, but also to how it is designed. In many instances, doctors perform the actual supervision of the AI system and curate its outcomes before they're presented to a patient.

That's why clinical evaluation must be performed in multiple settings to ensure that the AI solution performs properly across all groups, taking particular account of age, sex, ethnic origin and location.
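As an illustration, subgroup evaluation can be as simple as computing the same metric per population slice rather than only on the pooled data. The sketch below uses invented results and hypothetical column names.

import pandas as pd

results = pd.DataFrame({
    "correct":   [1, 1, 0, 1, 1, 1, 0, 0, 1, 0],  # 1 = prediction matched ground truth
    "sex":       ["F", "F", "F", "F", "F", "M", "M", "M", "M", "M"],
    "age_group": ["18-39", "65+"] * 5,
})

# Accuracy per slice: here females score 0.8 and males 0.4, a gap that
# signals a fairness problem even though pooled accuracy (0.6) looks passable.
print(results.groupby("sex")["correct"].mean())
print(results.groupby(["sex", "age_group"])["correct"].mean())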

It's only by using extensive human curation and evidence-based literature as the basis of knowledge for AI that we can ensure decision-making is not only effective but ethical.

There are already examples of patient-physician communication becoming more meaningful through AI technology. For instance, an AI-driven consultative intake form can streamline data flow from patient to doctor and automate visit notes, reducing the time spent at appointments, while also empowering patients ahead of the visit, providing doctors with essential patient data and improving efficiency across healthcare.
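As a rough illustration of that data flow, an intake form might be captured as a structured record and pre-rendered into a draft visit note. The fields and wording here are hypothetical, not a description of any specific product.

from dataclasses import dataclass

@dataclass
class IntakeForm:
    chief_complaint: str
    symptoms: list[str]
    duration_days: int

def draft_visit_note(form: IntakeForm) -> str:
    """Pre-fill the visit note so the appointment starts from structured data."""
    return (
        f"Chief complaint: {form.chief_complaint}\n"
        f"Reported symptoms: {', '.join(form.symptoms)}\n"
        f"Duration: {form.duration_days} day(s)"
    )

print(draft_visit_note(IntakeForm("headache", ["nausea", "photophobia"], 2)))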

Guidelines must be followed for ethical diagnosis

Within diagnosis technology, biases have the potential to impact the way clinicians do their jobs, patient outcomes, and ultimately the technology's value to health systems.

AI ethics is therefore important to ensure accuracy, efficacy, and safety in the use of the technology. One of the first hurdles is making sure that people are trained to use AI properly, which can be challenging as systems vary across companies and industries. Businesses that rely on AI should also ensure that these systems are used under appropriate conditions by trained medical personnel.

There are practical steps that can be taken to make sure guidelines are being followed, which is essential if organisations are to be deemed ethical.

For example, it's important that models are explainable, that systems can be continually investigated so decisions can be overridden where necessary, and that systems can be trained and improved over time without breaching patients' privacy. However, not all systems allow this; some are very opaque, which can restrict the inclusivity of the technology.
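One way to picture "investigable and overridable" is a suggestion object that carries its own evidence and records any clinician override alongside the original model output. This is a sketch under assumed names and structures, not any real system's API.

from dataclasses import dataclass

@dataclass
class Suggestion:
    condition: str
    probability: float
    evidence: list[str]                  # which inputs drove the score
    overridden_by: str | None = None
    final_condition: str | None = None

    def override(self, clinician_id: str, condition: str) -> None:
        """Record a clinician's decision without discarding the model output."""
        self.overridden_by = clinician_id
        self.final_condition = condition

s = Suggestion("influenza", 0.72, evidence=["fever", "myalgia", "recent exposure"])
s.override("dr_jones", "covid-19")       # the original suggestion stays visible for audit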

What's more, AI developers need to meet all certifications and standards and, if they want to go the extra mile, maintain an external validation board that continuously reviews and benchmarks their systems.

For example, it's important to comply with data processing regulations such as GDPR and HIPAA, and with medical device regulations such as the EU's Medical Device Regulation (MDR), and to process only the most necessary patient data.

Transparency is key

AI developers should also be transparent about how products are designed and the data they use to function before those products are used for diagnosis and treatment.

This means that AI developers need to provide evidence of accuracy and insight into their technology development process in the form of clinical tests, documentation, data samples, and peer-reviewed studies.

Systems should not be a black box. Every change in behaviour must be reviewed and approved by the physician building the system.

Basing AI on probabilistic modelling also brings advantages such as traceable outcomes, easy debuggability, and provenance.
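A toy example shows why: in a naive-Bayes-style probabilistic model, each reported symptom contributes an explicit, inspectable likelihood factor to a condition's score, so the final ranking can be decomposed term by term. All probabilities below are invented for illustration.

import math

priors = {"flu": 0.10, "common_cold": 0.30}
# P(symptom present | condition), with symptoms assumed independent given the condition
likelihood = {
    "flu":         {"fever": 0.85, "cough": 0.70},
    "common_cold": {"fever": 0.20, "cough": 0.60},
}

def score(condition: str, symptoms: list[str]) -> float:
    total = math.log(priors[condition])
    for s in symptoms:
        contribution = math.log(likelihood[condition][s])
        print(f"{condition}: symptom '{s}' contributes {contribution:+.3f}")  # traceable
        total += contribution
    return total

# "flu" outranks "common_cold" here, and the printout shows exactly which
# symptom factors produced that outcome, which is what makes it debuggable.
for c in priors:
    print(f"log-score({c}) = {score(c, ['fever', 'cough']):.3f}")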

AI can, and must, be designed to encourage inclusivity and equality in healthcare. By ensuring that training uses diverse, rich data sets free of implied biases, curated by a diverse set of medical experts who control decision-making, AI can augment human perception with inclusivity at its heart.

In practice, this means ensuring that guidelines are followed and that processes and systems are fully transparent.

By doing this, AI can support human resources, ensure quality healthcare is accessible to more people and relieve pressure on the healthcare system. All of this helps improve patient outcomes by allowing for streamlined service and more meaningful interactions.

 
