AI Safety Concerns Rise Amid Development of Potentially Uncontrollable Systems

Two prominent AI scientists, Yoshua Bengio and Max Tegmark, have warned of the dangers of uncontrollable artificial general intelligence (AGI). Their concern centers on the industry’s focus on building AGI systems that can act as “agents,” a design choice they argue could lead to a loss of human control.

Bengio notes that researchers, inspired by human intelligence, aim to create AGI systems that understand the world and pursue goals, and he sees significant risk in this approach. He fears that pursuing this path would amount to creating a new intelligent entity on Earth whose behavior cannot be predicted.

Tegmark suggests an alternative approach: “tool AI” systems designed for specific, narrow purposes and without agency, which can deliver benefits while minimizing risk. He proposes safety standards under which companies must demonstrate control over their AI systems before being allowed to sell them.

The Future of Life Institute, which Tegmark leads, called for a pause on advanced AI development in 2023. Now that the topic is gaining mainstream attention, Tegmark argues, it is time to establish guardrails that prevent uncontrollable AI systems from being built.

Source: https://www.cnbc.com/2025/02/07/dangerous-proposition-top-scientists-warn-of-out-of-control-ai.html