Opportunity at National Institute of Standards and Technology (NIST)
Theoretical Framework for Artificial Intelligence Assurance
Information Technology Laboratory, Information Access Division
Please note: This Agency only participates in the February and August reviews.
Garris, Michael D.
Artificial intelligence (AI), enabled by machine learning and embedded in autonomous systems, must be developed and deployed with assurance that it operates accurately, reliably, safely, and without bias. There is a critical need to advance theory and methodologies for providing assurance of AI technologies. We are particularly interested in interdisciplinary approaches that combine complementary capabilities and strengths, including intrinsic improvements to AI algorithms that provide explanation and support for decisions, scalable formal models and mathematical analysis of AI reasoning, and system safety engineering for embedded AI. This research will help drive a rigorous, scientific, system-level testing capability at NIST, essential for demonstrating the readiness of emerging AI technology and fostering public trust.
Artificial intelligence; Assurance; Interdisciplinary; Algorithm; Formal models; Safety engineering; Reliability
Open to U.S. citizens
Open to Postdoctoral applicants