Responsible AI – Everyone is Talking About it But What Is It?
The development of AI continues to advance at a blistering pace, increasing the need for companies to employ AI governance and adopt policies for the responsible development and deployment of AI. While the term “responsible AI” is frequently used, it is rarely well understood, and the concept itself is often complex. Fortunately, a growing body of resources is becoming available to help companies understand and implement responsible AI. Two of the more recent resources are sets of publications from NIST (the National Institute of Standards and Technology) and Microsoft. These publications illustrate each institution's efforts to develop best practices for responsible AI development.
NIST Updates AI RMF as Mandated by the White House Executive Order on AI
We have now reached the 180-day mark since the White House Executive Order (EO) on the Safe, Secure and Trustworthy Development of AI, and we are seeing a flurry of mandated actions being completed. See here for a summary of recent actions. One of the mandated actions was for the National Institute of Standards and Technology (NIST) to update its January 2023 AI Risk Management Framework (AI RMF 1.0), which it has now done. To that end, NIST released four draft publications intended to help improve the safety, security and trustworthiness of artificial intelligence (AI) systems and launched a challenge series to support the development of methods to distinguish between content produced by humans and content produced by AI.