NIST’s Chief AI Advisor Elham Tabassi Discusses the U.S. Government’s AI Efforts at SCSP AI Expo

Elham Tabassi (right) speaks during a keynote interview at the SCSP’s AI Expo for National Competitiveness, moderated by Washington AI Network founder Tammy Haddad

During the SCSP’s AI Expo for National Competitiveness on May 7 in Washington, D.C., Tammy Haddad, founder of the Washington AI Network, interviewed Elham Tabassi, the chief AI advisor at the National Institute of Standards and Technology (NIST), about the U.S. government’s AI efforts. 

Tabassi highlighted NIST’s recently unveiled AI Risk Management Framework (AI RMF), as well as NIST’s goal of promoting the safe and responsible development and use of AI technologies. 

“Knowing that we need to iterate, we are not going to find the perfect answer. So we get some answers, we learn something, and let’s build on that and do more,” said Tabassi, emphasizing that AI risk management research is about more than just computations and algorithms. Rather than consulting only computer scientists and software engineers about the risks of AI, she also draws on psychologists, sociologists, and cognitive scientists. 

Tabassi underscored a common misconception: that risk management and safety are inherently at odds with innovation. The goal of NIST is not to hinder AI innovation, but rather to create comprehensive means of managing the risks that emerge as AI advances. Tabassi noted, “…We want technology that’s easy to do the right thing, difficult to do the wrong thing, and easy to recover if somebody inadvertently did the wrong thing.” She acknowledged that the solutions NIST seeks can be somewhat elusive, given the rapid pace at which AI technologies and systems evolve.

Elham Tabassi (right) with bestselling author, journalist, and podcast host Kara Swisher, during the Washington AI Network’s “Ask Kara Anything” event on March 2, 2024, at the House at 1229.

There are ways, however, that Elham Tabassi and her colleagues at NIST can evaluate AI technology and surface the issues and capabilities of different systems, such as the AI Challenge Problems. These challenge problems are open to essentially anyone in the artificial intelligence field, with the goal of having engineers and scientists develop solutions to the issues the challenge problems bring to light. 

“And that’s really our job,” said Tabassi. “Kind of convene the community around problems and challenges that relate to all of them and try to advance them.” 

On the national security front, when asked whether participation by China or other outside actors in these evaluations could pose a threat, Tabassi emphasized her belief in open science. From a national security perspective, she explained, it is better to know other nations’ AI capabilities than to be left guessing. She added that foreign nationals who seek to participate in NIST’s public-facing offerings are flagged for additional vetting: “And if there are foreign nationals, they go through an extra vetting process that’s done by experts at NIST or outside of the NIST to make sure that national security considerations are all preserved.” 

The AI Expo interview concluded with Tabassi describing her goals for the future of AI safety and her belief that managing the risks of artificial intelligence should be a group effort, with people working together to ensure the technology benefits everyone. 
