Substantial research has identified and categorized AI-related risks, but a unified framework with consistent terminology is still lacking. This absence of standardization hinders efforts to integrate research, assess threats, and build the shared understanding needed for robust AI governance and regulation.
To address this challenge, researchers from MIT and the University of Queensland have developed an AI Risk Repository that compiles 777 risks from 43 taxonomies into an accessible, adaptable, and updatable online database. The repository is organized into two taxonomies: a high-level Causal Taxonomy that classifies risks by their causes and a mid-level Domain Taxonomy that categorizes risks into seven main domains and 23 subdomains.
This framework offers a structured foundation for understanding and mitigating AI risks, allowing policymakers, auditors, academics, and industry professionals to filter and analyze the specific risks relevant to their work. Because the database is updatable, it is also designed to support ongoing research and debate as new risks and taxonomies are identified.
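To illustrate how this kind of filtering might work in practice, here is a minimal sketch in Python. It is not the repository's actual API or schema: the record fields (risk_id, domain, subdomain, entity, timing) are hypothetical stand-ins for the kinds of attributes the Causal and Domain taxonomies describe.

```python
from dataclasses import dataclass

# Hypothetical record shape; the repository's actual column names may differ.
@dataclass
class Risk:
    risk_id: str
    domain: str       # one of the seven Domain Taxonomy domains
    subdomain: str    # one of the 23 subdomains
    entity: str       # Causal Taxonomy attribute (illustrative)
    timing: str       # Causal Taxonomy attribute (illustrative)
    description: str

def filter_risks(risks, **criteria):
    """Return the risks whose fields match every field=value criterion."""
    return [r for r in risks
            if all(getattr(r, field) == value
                   for field, value in criteria.items())]

# Illustrative entries, not actual repository records.
sample = [
    Risk("R-001", "Privacy", "Data leakage", "AI", "Post-deployment",
         "Model memorizes and exposes personal data."),
    Risk("R-002", "Misinformation", "False content", "Human", "Post-deployment",
         "Deliberate generation of misleading media."),
]

# Example: an auditor pulling all post-deployment privacy risks.
print(filter_risks(sample, domain="Privacy", timing="Post-deployment"))
```

Since the repository is distributed as a living online database, a real workflow would load its current export rather than hard-coded records like these.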
The study reviewed 17,288 articles and selected 43 relevant documents focused on AI risks. The findings reveal diverse definitions and framings of AI risk across the literature, underscoring the need for more standardized approaches. The Causal and Domain taxonomies were used to categorize the identified risks, which span issues such as AI system safety, socioeconomic impacts, and ethical concerns like privacy and discrimination.
The AI Risk Repository thus gives anyone working in AI a common, extensible framework for understanding and addressing AI-related risks.
Source: https://www.marktechpost.com/2024/08/17/mit-researchers-released-a-robust-ai-governance-tool-to-define-audit-and-manage-ai-risks/