Charles Bouveyron, director of the Institut 3iA Côte d’Azur and member of the SophIA Summit 2021 organising committee writes for Health Tech World about the role of Artificial Intelligence (AI) in society
Over the last few years, AI has entered many areas of our lives, both professional and personal.
The technology is already everywhere, and will be increasingly so as its potential and fields of application appear endless. From image recognition to decision support, personalised medicine to industrial maintenance, transport to finance, and cyber security to the prevention of natural disasters… AI raises the bar for efficiency and performance.
Faced with this revolution, and with the technology progressing at such speed, AI can arouse contradictory views. Veering between enthusiasm and occasionally mistrust, the general public has its reservations. Collaboration between all stakeholders, including governments, scientists, businesses and users, is essential to ensure responsible and ethical AI and ease any concerns.
From research to public debate
People are not always comfortable with AI expanding so quickly. Some see it as an intrusion, or even a threat. These concerns are totally understandable, particularly in light of some of the more questionable uses of AI – such as the social credit system designed by the Chinese government, where organisations or individuals are tracked and evaluated for trustworthiness.
But it is important to highlight that these extreme cases are marginal. For the most part, AI technologies enable significant progress to be made, whether that’s by breaking glass ceilings in terms of performance, or through saving valuable time in countless areas.
Whatever our thoughts on AI, by entering our lives, it has stopped being solely a subject of research and has become a topic of public debate.
Knowing when to share our data
Without data, artificial intelligence cannot learn and is not viable. But that data often belongs to citizens, hence it’s rightly debated. This is our private life. AI can only be utilised if we consent to how our personal data is used, even if it’s anonymised. This makes it a type of contract, which can only work if it is based on moral principle.
Broadly speaking, this principle relies on whether the use of our data is beneficial and useful to us (especially collectively). It also relies on operators, businesses, public institutions or governments knowing how to protect our data – particularly highly sensitive information such as medical history.
Hence it’s vital that citizens are well informed and have an understanding of AI tools and applications, so that they can make the right decisions and remain vigilant with regard to bad practices.
This awareness and knowledge will allow data to be shared with confidence. It can be achieved by educating all audiences on the basic concepts of AI, and many educational players are already offering training to familiarise people with how the technology can be applied.
To realise the potential of AI, the various players – academic research, industrial research, commercial enterprises and governments – must adopt a collaborative approach. Of course, the artificial intelligence market is subject to competition, like any other. There is, however, real cooperation on many levels. This is most evident within the global scientific community, but it’s also happening in the private sector: even GAFAM (Google, Apple, Facebook, Amazon, and Microsoft) publish a significant proportion of their scientific data.
This trend is also growing within the wider business community. Many organisations have increased their R&D investments over recent years and are working with the academic world to establish research collaborations on theoretical or applied subjects.
Finally, the state plays a critical role in aligning ambitions for AI with responsibility. The French government, for example, has created the “AI for Humanity” programme, providing public funding for research into artificial intelligence. It also supports the foundation of a network of dedicated public institutions – the 3IA institutes – which promote essential cooperation and support an ethical approach.
It’s vital that all parties commit to excellence, responsibility and ethics to secure the future use of AI. By engaging collectively, stakeholders can enable AI to become the best it can be, while reassuring users that they can use it and benefit from it with absolute confidence.
Charles Bouveyron will participate in the Soph.I.A Summit, which runs until tomorrow at the Sophia Antipolis technology park on the French Riviera and online.