AI Snake Oil: Unveiling the Limitations of Artificial Intelligence
In a fascinating interview with Eric Topol, Sayash Kapoor, co-author of the provocative new book "AI Snake Oil", offers a sobering perspective on the current state of artificial intelligence.
As a PhD candidate at Princeton University and a former Facebook engineer, Kapoor brings a unique blend of academic rigour and industry experience to this critical examination of AI's capabilities and limitations.
The Three Domains of AI
Kapoor and his co-author, Arvind Narayanan, categorise AI into three main areas:
- Predictive AI
- Generative AI
- Content moderation AI
The Pitfalls of Predictive AI
Kapoor is particularly sceptical about predictive AI, arguing that it often fails to deliver on its promises. He cites numerous examples where predictive AI has fallen short, including:
- The prediction of COVID-19 from chest X-rays
- Epic's sepsis prediction model
- The Optum-UnitedHealth algorithm for prioritising high-risk patients
These cases highlight the dangers of relying on AI for critical decision-making in healthcare and other high-stakes domains.
The Landscape of AI Snake Oil
The book presents a stark "Landscape of AI Snake Oil", which categorises various AI applications based on their effectiveness and potential for harm. This sobering view suggests that many AI applications are ineffective, harmful, or both.
Content Moderation AI: A Limited Tool
While content moderation AI has its uses, Kapoor argues that it's only effective for simpler tasks like detecting nudity or spam. The more complex aspects of content moderation, such as defining acceptable speech, require human judgement and cannot be fully automated.
The Promise and Peril of Multimodal AI
Despite his scepticism, Kapoor acknowledges the potential of multimodal AI models that incorporate various data types, such as genomics, imaging, and environmental factors. However, he cautions that these models still face significant hurdles when applied to real-world, individual decision-making scenarios.
A Call for Rigorous Testing and Human Oversight
Kapoor emphasises the need for rigorous testing of AI models, ideally through randomised trials comparing them to standard care. He also stresses the importance of human oversight, particularly in complex domains like healthcare and content moderation.
Inspiring the Next Generation of AI Researchers
As a young leader in the field, Kapoor offers valuable advice to aspiring AI researchers. He encourages a non-linear career path, highlighting the benefits of industry experience before pursuing graduate studies.
Kapoor also emphasises the importance of thinking long-term and considering the broader impact of one's work beyond academic publications.
Conclusion
While "AI Snake Oil" may paint a somewhat pessimistic picture of AI's current capabilities, it serves as a crucial reality check in an often overhyped field.
By highlighting the limitations and potential pitfalls of AI, Kapoor and Narayanan contribute to a more nuanced and responsible approach to AI development and deployment.
As we continue to explore the frontiers of artificial intelligence, voices like Kapoor's are essential in ensuring that we harness AI's potential responsibly, with a clear understanding of its strengths and limitations.