A senior U.S. official said on Monday that, given the potential threat posed by the rapid development of artificial intelligence (AI), safety precautions should be built into systems from the start rather than added later.
“We’ve normalized a world where technology products come off the line full of vulnerabilities and then consumers are expected to patch those vulnerabilities. We can’t live in that world with AI,” said Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency.
“It is too powerful, it is moving too fast,” she said in a telephone interview after holding talks in Ottawa with Sami Khoury, head of Canada’s Centre for Cyber Security.
On the same day that Easterly spoke, agencies from eighteen nations, including the U.S., endorsed new British-developed guidelines on AI cyber security that prioritize secure design, development, deployment, and maintenance.
“We have to look at security throughout the lifecycle of that AI capability,” Khoury said.
Earlier this month, leading AI developers agreed to work with governments to test new frontier models before their release, in an effort to manage the risks of the rapidly evolving technology.
“I think we have done as much as we possibly could do at this point in time, to help come together with nations around the world, with technology companies, to set out from a technical perspective how to build these capabilities as securely and safely as possible,” said Easterly.
(Adapted from Reuters.com)