Recently, the World Health Organization (WHO) released guidance on the ethical use and governance of Large Multi-Modal Models (LMMs) in healthcare, acknowledging the significant impact of Generative Artificial Intelligence (AI) technologies such as ChatGPT, Bard, and BERT.
LMMs are models that process multiple types of input, simulating human-like perception across several senses. This enables AI to respond to a broader range of human communication, making interactions more natural and intuitive. LMMs combine heterogeneous data types, such as text, images, audio, and video, allowing these models to interpret images, videos, and audio while engaging in conversation. Examples of LMMs include GPT-4V, Med-PaLM M, DALL-E, Stable Diffusion, and Midjourney.
The new WHO guidance outlines five main applications of LMMs in healthcare:
- Diagnosis and clinical care, such as responding to patients' written queries;
- Patient-guided use, such as investigating symptoms and treatment options;
- Clerical and administrative tasks, such as documenting and summarizing patient visits in electronic health records;
- Medical and nursing education, including providing trainees with simulated patient encounters; and
- Scientific research and drug development, including identifying new compounds.
LMMs have seen unprecedented adoption, spreading faster than any previous consumer technology. They can mimic human communication and perform tasks they were not explicitly programmed to do, but this rapid growth underscores the need to balance their benefits against potential risks.
Despite their promising applications, LMMs carry risks, including generating false, inaccurate, or biased statements that could misinform health decisions. The data used to train these models may suffer from quality or bias issues, potentially perpetuating disparities based on race, ethnicity, sex, gender identity, or age.
Broader concerns include the accessibility and affordability of LMMs, as well as the risk of automation bias, the tendency to over-rely on automated systems, which can lead healthcare professionals and patients to overlook errors.
Cybersecurity is also a crucial concern, given the sensitivity of patient information and the need to maintain trust in these algorithms.
The WHO recommends a collaborative approach involving governments, technology companies, healthcare providers, patients, and civil society in all stages of LMM development and deployment. It emphasizes the need for global cooperative leadership to effectively regulate AI technologies. The guidance provides a roadmap for leveraging the power of LMMs in healthcare while addressing their complexities and ethical considerations. In May 2023, the WHO emphasized the importance of applying ethical principles and appropriate governance—as outlined in its guidance on the ethics and governance of AI for health—when designing, developing, and deploying AI for health.
The six core principles identified by the WHO are:
- Protect autonomy;
- Promote human well-being, human safety, and the public interest;
- Ensure transparency, explainability, and intelligibility;
- Foster responsibility and accountability;
- Ensure inclusiveness and equity; and
- Promote AI that is responsive and sustainable.
In India, NITI Aayog has issued guiding documents on AI, such as the National Strategy for Artificial Intelligence and the Responsible AI for All report, focusing on social and economic inclusion, innovation, and trustworthiness.
The UK has adopted a light-touch approach, asking regulators in various sectors to apply existing regulations to AI. It published a white paper outlining five principles companies should adhere to: safety, security, and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
The US released a Blueprint for an AI Bill of Rights (AIBoR), outlining the potential harms of AI to economic and civil rights and setting forth five principles for mitigating them. Instead of a horizontal approach like the EU's, the blueprint advocates a sector-specific approach to AI governance, with policy interventions tailored to individual sectors such as health, labor, and education.
In 2022, China implemented some of the world's first nationally binding regulations targeting specific types of algorithms and AI. It enacted a law to regulate recommendation algorithms, with a focus on how they disseminate information.