The World Health Organization (WHO) is raising a red flag about the rollout of artificial intelligence (AI) health technologies in lower-income countries. Its concern is that if tech giants and wealthy nations dominate the development and deployment of these tools, poorer countries could be left behind or harmed.
At a recent media briefing, WHO's Alain Labrique voiced these concerns, saying the organization does not want to see new technology deepen existing inequalities. The warning accompanies newly released WHO guidelines on large multi-modal models (LMMs), the type of AI behind tools such as ChatGPT. These models can process text, images, and video, and they have become increasingly popular in healthcare.
If these models are not trained on data from lower-income regions, they may perform poorly or prove unhelpful there. WHO's Jeremy Farrar explained that AI holds real promise for health, but only if the risks that come with it are taken seriously.
The guidelines insist that oversight of these powerful AI tools cannot be left to tech companies alone. WHO is urging governments to work together on rules for how AI is built and deployed, and it wants companies, civil-society groups, and affected communities to have a say in how the technology is used.
The guidelines also warn of "industrial capture": the risk that large companies come to dominate AI development, crowding out universities and governments. To counter this, WHO recommends that once AI tools are released, independent auditors should verify that they work as intended and do not cause harm.
The guidelines further suggest that developers of AI tools receive ethics training, much as physicians do. Governments are encouraged to require early registration of these AI systems to promote transparency and curb the spread of misinformation.