Future-proofing healthcare with responsible AI


As AI rapidly evolves, it’s time to look at ways it can better support clinicians for a more responsible future, says KOSA CEO Layla Li…

AI is not just a tool for assisting diagnosis; it can also detect problems at an early stage through more detailed analysis and give clinicians a second opinion on difficult cases.

Now is the time to think about how AI can be used in ways that have a positive impact, not only within healthcare but on society in general.

To do so, someone needs to take responsibility for the AI systems being created, and for the technology behind them, so that they promote equality and diversity.

This is, by default, the right thing to do. But how easy is it for the companies that build and use AI to make sure they abide by ethical and responsible principles?

Causing “distrust” in the system

At the same time, applying AI to high-stakes health cases may cause ‘distrust’ in the system, and ethical questions may be raised as to how and why machine learning algorithms can make a crucial decision about a patient’s life.

There are two angles to this:

Firstly, if AI companies want to stay in business, and more importantly if they want to stay ahead of the curve, they need to evolve with fast-paced technology adoption, upcoming AI regulations, and their direct impact on a society that is changing for good.

In fact, more and more official reports say that companies need to step up their game on AI oversight and governance, both to ensure their AI is ethical and to protect themselves from liability by managing their exposure to risk. But:

  • Who should be responsible for making sure that AI is in good hands?
  • Who should make sure there is a solid grasp of bias mitigation in AI-enabled tools?

Companies “overlook” AI fairness and equality

Big tech companies and AI market leaders are driven by model accuracy and by how much revenue a model can generate once it is put into production, but they often overlook fairness and equality.

As much as current AI regulatory guidelines put pressure on companies to address this and avert potential harms, it is still up to the executives building AI to decide how to handle their systems’ societal responsibility.

According to the Harvard Business Review, AI systems that affect human lives should ideally be assessed by an impartial agency before being implemented, to offer clarity and prevent possible harms, just as environmental impact assessments must be approved by experts before a building project can start.

Even though no official legislation has yet taken full effect, several businesses have emerged to fill the gap in model and algorithmic audits and risk assessment.

Ethical AI database

A recently published, publicly available ethical AI database maps the companies that operationalise ethical services and help enterprises throughout their AI development, production, and post-production processes: bias detection and mitigation, data privacy, monitoring and observability, auditing, and other functions across the board.
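
To make “bias detection” concrete: one of the simplest checks such services run is a group-fairness metric like demographic parity, which compares a model’s positive-prediction rate across patient subgroups. Below is a minimal sketch in Python; the data, group labels, and review threshold are illustrative assumptions, not the output of any specific audit tool.

    import numpy as np

    def demographic_parity_gap(y_pred, groups):
        """Largest difference in positive-prediction rate between subgroups.

        y_pred: binary model predictions (0/1), one per patient
        groups: subgroup label (e.g. sex or ethnicity), one per patient
        """
        rates = {g: y_pred[groups == g].mean() for g in np.unique(groups)}
        return max(rates.values()) - min(rates.values()), rates

    # Hypothetical screening model that flags 60% of group "A" but only
    # 40% of group "B": a 0.2 demographic parity gap.
    y_pred = np.array([1, 1, 1, 0, 0, 1, 0, 0, 0, 1])
    groups = np.array(["A"] * 5 + ["B"] * 5)
    gap, rates = demographic_parity_gap(y_pred, groups)
    print(rates, gap)  # an auditor might flag any gap above ~0.1 for review

A real audit would go further (equalised odds, calibration by subgroup, and so on), but even this simple rate comparison can surface problems before a tool reaches patients.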

Integrating practices with AI

Secondly, there is the work of integrating methods and practices into the AI systems themselves, so that clinicians can develop understanding of, and appropriate trust in, the AI tools they use in everyday practice.

AI has the potential to radically transform healthcare by improving diagnostic accuracy and increasing efficiency in diagnosis and treatment pathways.

This is especially true in medical disciplines which are data-rich and rely on image-based diagnosis, such as radiology.

AI can be used for screening and for detecting problems at an early stage through more detailed analysis of the images.

Further, it can aid clinicians by providing a second opinion on difficult cases.
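
One common pattern for this kind of second-opinion support is confidence-based triage: the model never makes the final call, it only decides how quickly a clinician sees the case. Here is a minimal sketch, where the thresholds and the probability interface are illustrative assumptions rather than any vendor’s API.

    # Hypothetical triage thresholds; in practice these would be tuned
    # and clinically validated, not hard-coded.
    REVIEW_LOW, REVIEW_HIGH = 0.2, 0.8

    def triage(probability_abnormal: float) -> str:
        """Route a case based on the model's estimated probability of abnormality."""
        if probability_abnormal >= REVIEW_HIGH:
            return "urgent clinician review"   # confident something is wrong
        if probability_abnormal <= REVIEW_LOW:
            return "routine review"            # confident the image is clear
        return "clinician second read"         # uncertain: always defer to a human

    for p in (0.05, 0.50, 0.93):
        print(f"p={p:.2f} -> {triage(p)}")

The design point is that every path still ends with a human; the model only re-orders the queue.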

At first, applying deep neural networks to high-stakes cases such as medical imaging may cause ‘distrust’ in the system, and ethical questions may be raised as to how and why machine learning algorithms can make a crucial decision about a patient’s life.

For these reasons, responsible AI methods and practices are put in place that allow clinicians to trust the AI tools they use.

Emphasis should be placed on human-AI collaboration powered by responsible AI principles: not only clinicians understanding and appropriately trusting the AI system, but also the ease of complying with legislation such as the EU’s GDPR and the regulations of NHSX in the UK, the FDA in the USA, and others.
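
One practical way to build that understanding is to ship an explanation alongside every prediction. A widely used technique is gradient-based saliency, which highlights the image regions that most influenced the model’s output. Below is a minimal PyTorch sketch; the tiny untrained CNN is a stand-in assumption for a real trained imaging model.

    import torch
    import torch.nn as nn

    # Stand-in for a trained imaging model (illustrative, untrained).
    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(8, 1),  # single "abnormality" logit
    )
    model.eval()

    image = torch.randn(1, 1, 64, 64, requires_grad=True)  # stand-in scan
    score = model(image).squeeze()
    score.backward()                        # gradient of the score w.r.t. pixels
    saliency = image.grad.abs().squeeze()   # large values = influential regions
    print(saliency.shape)                   # torch.Size([64, 64])

Overlaying such a map on the scan lets a clinician check whether the model looked at the lesion or at an artefact, which is exactly the sort of check that turns a black box into something a clinician can appropriately trust.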

Responsible AI as active practice

The ethical AI ecosystem is categorised into different groups according to the problems companies solve and how they differentiate themselves.

What’s interesting is that this field is still in its nascent stages, yet there are already quite a few players. Businesses developing AI have a hard time deciding what to do first (data auditing, compliance checks, model monitoring, etc.), especially while having such a service is still not the norm.

In such cases, some actors take a multidimensional approach, providing support throughout each stage of the machine learning lifecycle.

An example is KOSA AI, a Dutch-based ML platform that builds AI governance software to audit, explain, and monitor AI systems, mitigate bias and harm, and check compliance for companies using AI at scale.

It seems like a lot to have on one plate, but the issues we are facing today do need multifaceted solutions that address and mitigate the root causes.
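
Of those functions, monitoring is perhaps the easiest to make concrete: once a model is in production, the live data can drift away from the training data and silently erode accuracy. A common drift check is the population stability index (PSI); the sketch below uses an illustrative feature and the usual rule-of-thumb alert level, both assumptions rather than a standard.

    import numpy as np

    def population_stability_index(expected, observed, bins=10):
        """PSI between a training-time distribution and live production data."""
        edges = np.histogram_bin_edges(expected, bins=bins)
        # Note: live values outside the training range are dropped by this
        # simple binning; a production monitor would add overflow bins.
        e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
        o_frac = np.histogram(observed, bins=edges)[0] / len(observed)
        e_frac = np.clip(e_frac, 1e-6, None)  # avoid log(0)
        o_frac = np.clip(o_frac, 1e-6, None)
        return float(np.sum((o_frac - e_frac) * np.log(o_frac / e_frac)))

    rng = np.random.default_rng(0)
    train_ages = rng.normal(55, 10, 5000)  # hypothetical training population
    live_ages = rng.normal(62, 10, 5000)   # live patients skew older: drift
    psi = population_stability_index(train_ages, live_ages)
    print(f"PSI = {psi:.2f}")  # values above ~0.2 are commonly treated as drift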

The decisions that AI algorithms make can affect many aspects of our everyday lives, especially in healthcare. So helping companies frame the problem, and design their AI models around solving it, is crucial at this stage of the AI industry’s development.

This is just a start, and certainly not enough on its own. Practicing responsible AI at every step of the development and production process, to gain better insight into the AI and to ensure equity, is also part of the larger mission.

Approaching responsible AI as a strategic business issue at the core of a project, not as an afterthought, should be the number one priority for AI builders and users.

Conclusion

As AI systems become more prevalent in our lives, companies need to think about how they can use them responsibly and for the greater good.

Companies have to take a proactive stance on AI bias and the other risks their models may carry, as they are the ones responsible for creating and using algorithms that have the potential to harm.

They also need to ensure that they comply with regulations and have a governance structure in place that operationalises their responsibility for mitigating any potential harm.

An easy way to do this is to collaborate with a company that specialises in responsible AI and can assess the technology’s impact and avoid ethical pitfalls before, during, and after a product’s launch.

Such companies will serve as the backbone for assisting you in navigating the inevitable ethical and practical problems presented by AI.
