NIST's Software Un-Standards
Published by The Lawfare Institute
The National Institute of Standards and Technology (NIST) has become a beacon of hope for those who trust in federal standards for software and AI safety. Lawmakers and commentators have even suggested that compliance with NIST standards ought to shield entities from liability. With more than a century of expertise in scientific research and standard-setting, NIST would seem uniquely qualified to develop such standards.
But as I argue in this paper, this faith is misplaced. NIST’s latest forays into risk management frameworks disavow concrete metrics or outcomes and solicit voluntary participation rather than providing stable mandates. That open-ended approach can be traced to the reversal of NIST’s earlier efforts to promulgate federal software standards during the 1970s and 1980s. The failure of those federal regulatory efforts highlights fundamental challenges inherent in software development that persist today.
Policymakers should draw upon the lessons of NIST’s experience and recognize that federal standards are unlikely to be a silver bullet. Instead, they should heed NIST’s admonition that the practice of software development remains deeply fragmented for intrinsic reasons. Any effort to establish a universal standard of care must grapple with the need to accommodate the broad heterogeneity of accepted practices in the field.
You can read the paper here or below: