As artificial intelligence (AI) agents become increasingly sophisticated, it’s becoming harder to distinguish between human users and AI-powered ones online. To address this issue, researchers from MIT, OpenAI, Microsoft, and other tech companies and academic institutions have proposed the use of personhood credentials – a verification technique that enables someone to prove they are a real human online while preserving their privacy.
Personhood credentials would allow individuals to demonstrate their humanity without revealing sensitive information about themselves. To obtain such a credential, a person would need to provide proof of identity through an offline process, such as visiting a government agency or presenting a tax ID number. This offline step ensures that only humans can obtain personhood credentials; even the most advanced AI systems cannot complete it.
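The issue-then-verify flow described above can be sketched with a toy blind-signature scheme. This is purely illustrative: the researchers do not prescribe a specific cryptographic construction, and the key sizes and names below are hypothetical (real deployments would use standardized, full-strength primitives). The idea is that an issuer signs a blinded token after the offline identity check, so the resulting credential verifies against the issuer's public key but cannot be linked back to the person who obtained it.

```python
import hashlib
import secrets
from math import gcd

# Toy RSA parameters (hypothetical; a real system would use >= 2048-bit keys).
p, q = 1000003, 1000033
n = p * q
phi = (p - 1) * (q - 1)
e = 65537                 # issuer's public exponent
d = pow(e, -1, phi)       # issuer's private exponent

def h(token: bytes) -> int:
    """Hash a token into the RSA message space."""
    return int.from_bytes(hashlib.sha256(token).digest(), "big") % n

# 1. The user creates a random token and blinds it before sending it to
#    the issuer, so the issuer never sees the token itself.
token = secrets.token_bytes(16)
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (h(token) * pow(r, e, n)) % n

# 2. After the offline identity check, the issuer signs the blinded value.
blind_sig = pow(blinded, d, n)

# 3. The user unblinds the signature, yielding a credential that the
#    issuer cannot link back to the issuance session.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Any online service can verify the credential against the issuer's
#    public key (e, n) without learning who the holder is.
assert pow(sig, e, n) == h(token)
print("credential verified")
```

Because verification uses only the issuer's public key and the unblinded token, the service learns that *some* human obtained a credential, but not which one, which is the privacy property the researchers emphasize.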
The benefits of using personhood credentials include filtering out AI-generated content, moderating social media feeds, and determining the trustworthiness of online information. However, there are also risks associated with implementing such a system, including concentration of power among issuers and potential limitations on free expression in certain sociopolitical environments.
To mitigate these risks, the researchers suggest that personhood credentials be implemented with multiple independent issuers and open protocols, so that no single party controls access and freedom of expression is preserved. The paper encourages governments, policymakers, leaders, and researchers to invest more resources in developing and implementing personhood credentials, exploring different implementation approaches, and creating policies for their use.
As AI capabilities continue to advance, it is crucial for governments and large companies to adapt their digital systems so they can verify a person's humanity while preserving privacy and safety.
Source: https://news.mit.edu/2024/3-questions-proving-humanity-online-0816