NIST - National Institute of Standards and Technology
- Conducting fundamental research to advance trustworthy AI technologies and understand and measure their capabilities and limitations.
- Applying AI research and innovation across NIST laboratory programs.
- Establishing benchmarks and developing data and metrics to evaluate AI technologies.
- Leading and participating in the development of technical AI standards.
- Contributing to discussions and development of AI policies, including supporting the National AI Advisory Committee.
- Hosting the NIST Trustworthy & Responsible AI Resource Center providing access to a wide range of relevant AI resources.
The consortium includes more than 200 leading AI stakeholders and will support the U.S. AI Safety Institute at NIST.
Today, U.S. Secretary of Commerce Gina Raimondo announced the creation of the U.S. AI Safety Institute Consortium (AISIC), which will unite AI creators and users, academics, government and industry researchers, and civil society organizations in support of the development and deployment of safe and trustworthy artificial intelligence (AI). The consortium will be housed under the U.S. AI Safety Institute (USAISI) and will contribute to priority actions outlined in President Biden’s landmark Executive Order, including developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.
The President’s Executive Order (EO) on Safe, Secure, and Trustworthy Artificial Intelligence (14110), issued on October 30, 2023, charges multiple agencies – including NIST – with producing guidelines and taking other actions to advance the safe, secure, and trustworthy development and use of AI. The EO directs NIST to carry out the activities listed above.
Most of the EO's tasks for NIST have a 270-day deadline. NIST will seek public comment on draft documents produced under the EO. For EO-related questions, email ai-inquiries@nist.gov.