US standards and tech group seeks public input on AI safety, development guidelines

NIST, under the U.S. Commerce Department, seeks public input on AI safety, inspired by President Biden’s executive order.

The United States National Institute of Standards and Technology (NIST) within the U.S. Department of Commerce has released a request for information to support its duties outlined in the latest presidential executive order concerning the secure and responsible development and use of artificial intelligence (AI).

The agency is inviting public input until Feb. 2, 2024, to gather feedback that will inform testing methods for ensuring the safety of AI systems.

U.S. Secretary of Commerce Gina Raimondo stated that the initiative stems from President Joe Biden’s October executive order, which instructs NIST to create guidelines for AI evaluation, including red-teaming, foster consensus-based standards and establish testing environments for assessing AI systems. This framework aims to support the AI community in developing AI safely, reliably and responsibly.

NIST’s request for information seeks input from AI companies and the public on managing the risks of generative AI and reducing the spread of AI-generated misinformation.

Generative AI, capable of creating text, photos and videos from open-ended prompts, has sparked both enthusiasm and concern. Worries include job displacement, electoral disruption and the possibility that the technology could surpass human capabilities with catastrophic consequences.

The request also seeks input on where “red-teaming” would be most effective for AI risk assessment and on establishing best practices for the approach. Red-teaming, a practice originating in Cold War simulations, is a technique in which a group known as the red team simulates adversarial scenarios or attacks to expose the vulnerabilities and weaknesses of a system, process or organization. The method has long been employed in cybersecurity to uncover new risks.

Related: UK AI Safety Summit: Musk likens AI to ‘magic genie,’ says no jobs needed in future

In August, the inaugural U.S. public red-teaming evaluation event took place at a cybersecurity conference coordinated by AI Village, SeedAI and Humane Intelligence.

In November, NIST announced the formation of a new AI consortium, along with an official notice calling for applicants with relevant credentials. The consortium aims to develop and implement specific policies and measurements to ensure U.S. lawmakers take a human-centered approach to AI safety and governance.

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change