
A new peer-reviewed paper has highlighted the need for a clear framework for AI research, given the rapid adoption of artificial intelligence by children and adolescents who use digital devices to access the internet and social media.
The recommendations are based on a critical appraisal of current shortcomings in research on how digital technologies impact young people's mental health, and an in-depth analysis of the challenges underlying those shortcomings.
The paper calls for a “critical re-evaluation” of how we study the impact of internet-based technologies on young people’s mental health, and outlines where future AI research can learn from several pitfalls of social media research.
Existing limitations include inconsistent findings and a lack of longitudinal, causal studies.
Dr Karen Mansfield, postdoctoral researcher at the Oxford Internet Institute (OII) and lead author of the paper, said: “Research on the effects of AI, as well as evidence for policymakers and advice for caregivers, must learn from the issues that have faced social media research.
“Young people are already adopting new ways of interacting with AI, and without a solid framework for collaboration between stakeholders, evidence-based policy on AI will lag behind, as it did for social media.”
The paper describes how the impact of social media is often interpreted as one isolated causal factor, which neglects different types of social media use, as well as contextual factors that influence both technology use and mental health.
Without rethinking this approach, future research on AI risks getting caught up in a new media panic, as happened with social media.
Other challenges include measures of social media use that are quickly outdated, and data that frequently excludes the most vulnerable young people.
The authors propose that effective research on AI will ask questions that don’t implicitly problematise AI, ensure causal designs, and prioritise the most relevant exposures and outcomes.
The paper concludes that as young people adopt new ways of interacting with AI, research and evidence-based policy will struggle to keep up.
However, by ensuring that our approach to investigating the impact of AI on young people learns from the shortcomings of past research, we can more effectively regulate how AI is integrated into online platforms and how those platforms are used.
Professor Andrew Przybylski, OII Professor of Human Behaviour and Technology and a contributing author of the paper, said: “We are calling for a collaborative evidence-based framework that will hold big tech firms accountable in a proactive, incremental, and informative way.
“Without building on past lessons, in ten years we could be back to square one, viewing the place of AI in much the same way we feel helpless about social media and smartphones.
“We have to take active steps now so that AI can be safe and beneficial for children and adolescents.”