Can AI chatbots break the silence? Rethinking men’s mental health support

Published on: November 19, 2025 | Last updated: November 27, 2025

By Dr Serufusa Sekidde

Years ago, I lost a patient to suicide.

He was bright, outwardly strong, and yet, like so many men, he suffered in silence.

I feel I missed the warning signs, and that is why the tragedy still weighs heavily on me.

Every 19 November, International Men’s Day, reminds me that we need urgent action on a silent crisis: men’s mental health.

Men are significantly less likely to seek help for mental health issues, and this reluctance contributes to higher rates of untreated anxiety, depression, and suicide.

The reasons are complex, ranging from stigma and cultural expectations to the persistent myth that vulnerability is weakness.

For many, especially teenagers and young men, the idea of opening up to a therapist or even a friend feels insurmountable.

At the same time, something new is happening.

Teenage boys and young men are increasingly turning to personalised AI tools not only for therapy, but for companionship and even romantic advice.

A recent survey reported by The Guardian revealed this trend, signalling both the potential and complexity of AI-driven mental health support.

Here are three solutions we must prioritise to make AI a force for good in men’s mental health:

First, we need to position AI as a gateway to mental health care and support, not a destination.

Tech companies should design AI chatbots as entry points, encouraging men to seek professional help when needed.

This could mean embedding escalation protocols, whereby the system recognises high-risk responses and connects users to a human therapist.

This must become standard practice, so that technology opens the door rather than replacing the therapist behind it.

We are already seeing promising early examples that show this can be done:

The World Health Organization’s transdiagnostic chatbot for distressed adolescents in Jordan and other low- and middle-income countries was designed with human-centred principles, including escalation pathways to local mental health services when risk indicators are detected.

Similarly, Ghana’s emotion-aware chatbot encouraged 66 per cent of users to seek professional care after initial engagement, showing its role as a gateway.
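What might such an escalation pathway look like in practice? Below is a minimal sketch in Python, assuming a hypothetical chatbot service; the phrase list, the escalate_to_human helper, and the canned replies are all illustrative inventions, not any real product’s logic. Production systems would rely on clinically validated risk classifiers and human review rather than simple keyword matching.

```python
# Minimal sketch of a chatbot escalation protocol (hypothetical).
# The phrase list, helpers, and replies are illustrative assumptions,
# not any real product's implementation.

HIGH_RISK_PHRASES = {
    "no reason to live",
    "end it all",
    "better off dead",
}

def assess_risk(message: str) -> bool:
    """Crude stand-in for a clinical risk classifier: flag a message
    if it contains any known high-risk phrase."""
    text = message.lower()
    return any(phrase in text for phrase in HIGH_RISK_PHRASES)

def escalate_to_human(message: str) -> str:
    """Hypothetical handoff: a real system would page an on-call
    clinician or connect the user to a crisis line here."""
    return ("It sounds like you are going through something serious. "
            "I am connecting you with a human counsellor right now.")

def generate_chat_reply(message: str) -> str:
    """Placeholder for the normal AI conversation turn."""
    return "Thanks for sharing. Tell me more about how you're feeling."

def respond(message: str) -> str:
    """Route each message: escalate if high-risk, otherwise chat."""
    if assess_risk(message):
        return escalate_to_human(message)
    return generate_chat_reply(message)

if __name__ == "__main__":
    print(respond("Some days I feel there is no reason to live."))
    # High-risk phrase detected -> the bot hands over to a human.
```

The point of the sketch is the routing decision, not the detection method: whatever classifier sits behind assess_risk, every high-risk turn must leave the automated loop and reach a person.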

Second, we need ethical design and data governance embedded into every system.

Transparency about how data is used is non-negotiable. Users must know who owns their information and how it is safeguarded.

Mental health cannot become another frontier for exploitation. Governments should enforce strong data protection laws, and tech companies must embed ethical frameworks from the start.

Lastly, we need to integrate AI with clinical pathways and human support.

AI tools should complement existing health systems, not operate in isolation.

Linking chatbots to telehealth services, crisis lines, and community support creates a safety net that technology alone cannot provide.

Healthcare leaders must ensure these integrations are practical and accessible.

Kudos go to early adopters like the National Suicide Prevention Lifeline in the USA, which uses AI to analyse caller language and prioritise high-risk cases for immediate human intervention.

To be sure, AI can never replicate genuine empathy, and for young men who already feel isolated, relying on a machine may deepen detachment rather than foster real human connection.

Chatbots cannot truly understand pain or offer the empathy that comes from human connection.

There is also the danger of misinformation, missed warning signs, and the ever-present question of privacy.

For example, who owns the deeply personal information shared with a chatbot, and how is it protected?

No one wants a repeat of the significant 2024 cybersecurity incident at Confidant Health, an AI-driven healthcare firm based in the United States, where a misconfigured server exposed an alarming 5.3 terabytes of confidential mental health data.

Nonetheless, I believe AI chatbots can be a valuable complement to existing care options because they offer anonymity, immediacy, and a non-judgemental ear.

Since releasing my song and music video Mental Health SOS, I’ve spoken with teenagers and young men whose stories echo the evidence: for many of them, opening up to a therapist or even a friend feels insurmountable.

For a young man who feels unable to speak to a human, a chatbot might be the first step. But it should never be the last.

If we pair innovation with ethics, and algorithms with human care, we can create a future where no man suffers in silence.

That’s a vision worth striving for, not just on International Men’s Day but every day.

About the author

Dr Serufusa Sekidde is a Ugandan pharmaceutical executive living in London.

As a Senior Fellow at the Aspen Institute (USA), he advances health equity and innovation across crypto and AI through thought leadership in outlets such as CNN, the BBC, and The Guardian.

He also hosts the podcast Roots, Routes & Real Talk and is an award-winning rapper.

Dr Sekidde won the Senior Leader Award in the STEM category at this year’s Black British Business Awards.

He is @serufusa on social media.
