The Department of Commerce’s National Telecommunications and Information Administration (NTIA) has released an Artificial Intelligence (AI) Accountability Policy Report offering policy recommendations to support safe, secure, and trustworthy AI innovation.
The report draws on more than 1,400 stakeholder comments submitted in response to NTIA’s AI Accountability Policy Request for Comment. It urges the government to provide guidance, support, and regulation for AI systems, calling for improved transparency into those systems, independent evaluations to verify the claims made about them, and consequences for imposing unacceptable risks or making unfounded claims.
“Responsible AI innovation will bring enormous benefits, but we need accountability to unleash the full potential of AI,” said Alan Davidson, Assistant Secretary of Commerce for Communications and Information and NTIA Administrator. “NTIA’s AI Accountability Policy recommendations will empower businesses, regulators, and the public to hold AI developers and deployers accountable for AI risks, while allowing society to harness the benefits that AI tools offer.”
The report delves into the role of standards, noting that international technical standards are vitally important and may be necessary to define the methodology for certain kinds of audits. Under-developed standards “mean uncertainty for companies seeking compliance, diminished usefulness of audits, and reduced assurance for customers, government, and the public.” The wide range of sectors deploying AI technology, many with their own applications, risks, and terminology, presents challenges for AI standards development. (The lack of sector-specific terminology and vocabulary was frequently cited as a challenge to AI standardization at ANSI’s recent listening session on standardization in the healthcare and financial services sectors.)
Per NTIA, commenters noted the need for standards and benchmarks in areas including:
· AI risk hierarchies, acceptable risks, and tradeoffs;
· performance of AI models, including for fairness, accuracy, robustness, reproducibility, and explainability (see the illustrative sketch following this list);
· data quality, provenance, and governance;
· internal governance controls, including team compositions and reporting structures;
· stakeholder participation;
· security;
· internal documentation and external transparency; and
· testing, monitoring, and risk management.
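The report itself does not prescribe particular metrics, but the "performance" bullet above can be made concrete: a fairness or accuracy benchmark ultimately reduces to computing an agreed-upon statistic over a model's predictions. The sketch below is purely illustrative and is not drawn from the report; the metric choices (overall accuracy and a demographic-parity gap) and all names in it are hypothetical assumptions.

```python
# Illustrative only: a toy benchmark computing two statistics that an AI
# performance standard might codify: overall accuracy and a
# demographic-parity gap (the difference in positive-prediction rates
# across groups). Metric choices and data are hypothetical, not NTIA's.

from collections import defaultdict

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    by_group = defaultdict(list)
    for pred, group in zip(y_pred, groups):
        by_group[group].append(pred)
    rates = {g: sum(preds) / len(preds) for g, preds in by_group.items()}
    return max(rates.values()) - min(rates.values())

# Toy data: binary labels, binary predictions, two demographic groups.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(f"accuracy:   {accuracy(y_true, y_pred):.2f}")                 # 0.75
print(f"parity gap: {demographic_parity_gap(y_pred, groups):.2f}")   # 0.00

# A standard could then attach a pass/fail threshold, e.g. gap <= 0.1;
# that 0.1 figure is an arbitrary placeholder, not a value from the report.
```

Standardized definitions of statistics like these, along with the data and thresholds used to evaluate them, are exactly the kind of shared methodology the report argues audits need in order to be comparable across companies and sectors.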
The report notes the need for accelerated international standards work and expanded participation in technical standards and standards-setting processes. It advises that the government can foster the utility of standards for accountability purposes by:
1. encouraging and fostering participation by diverse stakeholders, including civil society, non-industry participants, and those involuntarily affected by AI systems;
2. helping to improve and expand access to standards publications for traditionally under-represented parties;
3. supporting methods to align industry standards with societal values; and
4. in appropriate circumstances, developing guidelines or other resources that contribute toward standards development.