AI Risk Management Framework

The National Institute of Standards and Technology (NIST) has developed this practical, sector- and use-case-agnostic framework to help individuals, organizations, and societies manage risks related to artificial intelligence (AI). Intended for voluntary use, the AI Risk Management Framework (AI RMF) was created to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. The framework is organized around four core functions, Govern, Map, Measure, and Manage, which structure AI risk management activities at their highest level.
NIST has developed a companion resource, the AI RMF Playbook, which provides suggested actions for achieving the outcomes laid out in the AI RMF, with the suggestions aligned to each of the framework's categories and subcategories. NIST has also launched the Trustworthy and Responsible AI Resource Center, developed to support and operationalize the AI RMF and its accompanying Playbook.