AI Expert Warns of Existential Risk if Humans Don’t Control AI’s Future

Jared Kaplan, co-founder and chief science officer of Anthropic, is sounding the alarm about humanity’s future with artificial intelligence. In a recent interview with The Guardian, he predicted that as soon as 2027, and no later than 2030, humans will have to decide whether to let AI models train themselves.

This “ultimate risk,” as Kaplan calls it, could trigger an “intelligence explosion”: a rapid, self-reinforcing increase in AI capabilities that could yield major benefits for humanity, such as scientific breakthroughs. However, Kaplan also worries that AI could surpass human intellect and become uncontrollable.

Kaplan’s warnings join a growing chorus of prominent figures in the AI field, including Geoffrey Hinton and OpenAI’s Sam Altman, who have cautioned about the potentially disastrous consequences of unchecked AI development.

While some experts disagree with Kaplan’s views, others acknowledge the risks posed by advanced AI. As the field continues to advance, it is essential that humans retain control over AI’s trajectory to ensure it benefits society as a whole.

Key Takeaways:

* As soon as 2027, and no later than 2030, humans may have to decide whether to let AI models train themselves.
* A resulting “intelligence explosion” could dramatically increase AI capabilities and deliver major benefits for humanity.
* Kaplan is concerned about the possibility of AI surpassing human intellect and becoming uncontrollable.
* The AI industry’s warnings warrant careful consideration and discussion among experts and policymakers.

Source: https://futurism.com/artificial-intelligence/anthropic-ai-scientist-doom