Regulators should consider 3 factors for AI safety, former national cyber director says

National Cyber Director Chris Inglis speaks at the Council of Foreign Relations on April 20, 2022 in Washington, DC. Inglis told the National Artificial Intelligence Advisory Committee March 5, 2024 that regulators should take a holistic approach to the technology. Drew Angerer/Getty Images

During a meeting focused on safety considerations of the emerging technology, former National Cyber Director Chris Inglis advocated for a holistic approach to artificial intelligence.

Artificial Intelligence and machine learning systems should be treated holistically, focusing on both the technology and societal components, according to former National Cyber Director Chris Inglis, who offered his perspective on the future policy direction for AI technologies during a National Artificial Intelligence Advisory Committee meeting held on Tuesday. 

“Any system of interest is composed of technology, people and doctrine,” he said. “And the system's performance depends on all three of these; no one of those factors — technology, people or doctrine — can wholly account for deficiency in another one of those.”

Within those three factors, Inglis said regulators should first clarify the guiding philosophy behind an AI tool's purpose, then ensure a technically capable workforce is in place to manage the software's implementation and use in line with the original "doctrine" the tool set out to achieve.

“This approach to managing technology particularly applies to implementing AI systems within existing digital ecosystems,” he said. “What's important in the technology component of that system is not simply the technology itself, but the data that feeds the system. And the relationship that the system has with the human components of that system.”

Inglis's remarks echo the Biden administration's goal of pursuing human-centered design and governance for AI and emerging technologies. That approach shapes software design so that human users are considered throughout the development process. The administration has also pushed to cultivate a federal workforce equipped to technically manage and regulate AI.

“We need to actually make sure we focus on the human component of the system, that the human is prepared to serve the role that they will play within the system,” he said. “No amount of intelligence built into the system — presumed intelligence built into the system — can account for a lack of a critical thinker in the form of the human being.”