August 6, 2020 | AI Kickoff Webinar
This webinar kicks off a NIST initiative involving private and public sector organizations and individuals in discussions about building blocks for trustworthy AI systems and the associated measurements, methods, standards, and tools to implement those building blocks when developing, using, and overseeing AI systems. NIST's effort will be informed by a series of workshops that will follow this initial session.
August 18, 2020 | Bias in AI Workshop
This workshop aims to develop a shared understanding of bias in AI: what it is and how to measure it. This online event will consist of collaborative panels and breakout sessions and will bring together experts from the public and private sectors to engage in important discussions about bias in AI.
Artificial Intelligence (AI) is rapidly transforming our world. Remarkable surges in AI capabilities have led to a number of innovations including autonomous vehicles and connected Internet of Things devices in our homes. AI is even contributing to the development of a brain-controlled robotic arm that can help a paralyzed person feel again through complex direct human-brain interfaces. These new AI-enabled systems are revolutionizing everything from commerce and healthcare to transportation and cybersecurity.
AI has the potential to impact nearly all aspects of our society, including our economy, but the development and use of the new technologies it brings are not without technical challenges and risks. AI must be developed in a trustworthy manner to ensure reliability, safety and accuracy.
NIST has a long-standing reputation for cultivating trust in technology by participating in the development of standards and metrics that strengthen measurement science and make technology more secure, usable, interoperable and reliable. This work is critical in the AI space to ensure public trust of rapidly evolving technologies, so that we can benefit from all that this field has to promise.
AI systems typically make decisions based on data-driven models created by machine learning, or the system's ability to detect and derive patterns. As the technology advances, we will need to develop rigorous scientific testing that ensures secure, trustworthy and safe AI. We also need to develop a broad spectrum of standards for AI data, performance, interoperability, usability, security and privacy.
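The idea of a model deriving patterns from data can be illustrated with a toy classifier. The sketch below is purely illustrative (the data, labels, and function names are invented for this example, not drawn from any NIST system): it "learns" one pattern per class, the average of the training examples, and classifies new inputs by proximity to those learned patterns.

```python
# Toy data-driven model: a nearest-centroid classifier. Training derives
# a pattern (the mean feature vector) for each label from example data;
# prediction assigns the label whose learned pattern is closest.

def train(samples):
    """Compute a centroid (average feature vector) for each label."""
    totals = {}
    for features, label in samples:
        sums, count = totals.get(label, ([0.0] * len(features), 0))
        sums = [s + f for s, f in zip(sums, features)]
        totals[label] = (sums, count + 1)
    return {label: [s / count for s in sums]
            for label, (sums, count) in totals.items()}

def predict(centroids, features):
    """Return the label of the closest learned centroid."""
    def dist2(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: dist2(centroids[label]))

# Two clusters the model must discover from the examples alone.
data = [([1.0, 1.0], "low"), ([1.2, 0.8], "low"),
        ([9.0, 9.5], "high"), ([8.8, 9.1], "high")]
model = train(data)
print(predict(model, [1.1, 0.9]))   # near the "low" cluster -> "low"
print(predict(model, [9.2, 9.0]))   # near the "high" cluster -> "high"
```

Because the decision rule is derived entirely from the training data, the model's behavior, including its errors and biases, reflects whatever those data contain; that dependence is what makes testing and standards for AI data and performance so important.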
NIST participates in interagency efforts to further innovation in AI. NIST Director and Undersecretary of Commerce for Standards and Technology Walter Copan serves on the White House Select Committee on Artificial Intelligence. Charles Romine, Director of NIST's Information Technology Laboratory, serves on the Machine Learning and AI Subcommittee.
A February 11, 2019, Executive Order tasks NIST with developing "a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies." For more information, see: /topics/artificial-intelligence/ai-standards.
NIST research in AI is focused on how to measure and enhance the security and trustworthiness of AI systems. This includes participation in the development of standards that ensure innovation, public trust and confidence in systems that use AI technologies. In addition, NIST is applying AI to measurement problems to gain deeper insight into the research itself as well as to better understand AI's capabilities and limitations.
The NIST AI program has two major goals:
The recently launched AI Visiting Fellow program brings nationally recognized leaders in AI and machine learning to NIST to share their knowledge and experience and to provide technical support.