Is AI regulation having a moment?
The Federal Trade Commission opened an investigation this week into whether OpenAI, maker of ChatGPT, has violated consumer protection laws. The agency is looking into the company’s security practices as well as whether its products harmed people by making false statements about them. Regulatory bodies elsewhere are watching artificial intelligence, too, including in China, where new rules on the technology are expected to go into effect next month. We know AI is having a moment, but what about AI regulation?
Companies are competing for their AIs to be fast and smart.
“Whether they’re safe is also a value, but it’s not the overriding one,” said Dan Hendrycks, who directs the Center for AI Safety.
He also advises Elon Musk’s brand-new company, xAI.
In his opinion, more laws are needed to force companies to slow down.
“If an AI developer’s AI causes substantial harm, then the AI developer is responsible,” he said.
Many AI bills have been introduced in Congress, and federal agencies are enforcing existing laws around how the tech is used.
Meanwhile, Samir Jain at the Center for Democracy and Technology said he expects countries to collaborate on compatible global standards.
“We do need to get on it,” he said. “AI is already being used to mediate access to economic opportunities like credit, housing, and employment.”
There’s that saying: Safety rules are written in blood. We don’t want that, and we don’t want them to be written by a computer either.