AI safety researcher Roman Yampolskiy has estimated a 99.999999% probability that artificial intelligence (AI) will end humanity. OpenAI CEO Sam Altman disputes this claim, however, arguing that AI will be smart enough to prevent itself from causing existential doom.
Altman believes that researchers are on the right track to figure out how to prevent AI from destroying humanity and thinks that the biggest problem is not the technology itself but rather the societal and regulatory frameworks surrounding its development. He hopes that by the time superintelligence is achieved, safety concerns will be mitigated with “surprisingly little” societal impact.
Despite Altman’s optimism, experts and investors are growing increasingly concerned about the rapid advancement of generative AI, which poses significant security, privacy, and existential risks. Top tech companies, including Microsoft and Google, are heavily invested in this area, yet there are few policies governing its development, raising concerns about control and potentially catastrophic consequences.
Yampolskiy argues that the only way to prevent AI from becoming a threat is not to build it at all. Meanwhile, OpenAI’s recent funding round has brought the company back from the brink of bankruptcy, but investors are now demanding that it transform into a for-profit venture within two years or refund their money.
The situation highlights the need for more transparency and accountability in AI development, particularly when it comes to existential risks. As AI continues to advance rapidly, it’s essential to consider the potential consequences and work towards creating safer and more responsible technologies that prioritize human well-being.
Source: https://www.windowscentral.com/software-apps/sam-altman-ai-smart-enough-to-prevent-existential-doom