Should AI development require a license? ChatGPT’s creator thinks so, but some experts disagree

OpenAI CEO Sam Altman says Congress should impose licensing requirements on companies developing advanced artificial intelligence technology, but some experts warn against such a move.

OpenAI chief executive Sam Altman told Congress this week that lawmakers should require companies to obtain a federal license before developing advanced artificial intelligence technology like his organization's ChatGPT.

"We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models," Altman said in his testimony before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law.

"For example," Altman continued, "the U.S. government might consider a combination of licensing and testing requirements for development and release of AR [autoregressive] models above a threshold of capabilities."

Senators on both sides of the political aisle praised the idea as lawmakers scramble to impose regulations on the rapidly developing sector of the tech industry, while lamenting during the hearing that Congress should have done more to rein in the internet and social media when they debuted.


But experts say a rush to impose heavy-handed government restrictions on advanced AI would be a mistake – if such restrictions are even feasible.

When news of Altman's licensing suggestion broke, critics on social media warned the move would favor deep-pocketed early AI innovators like OpenAI, Google, and Microsoft, resulting in regulatory capture that would stifle competition from would-be rivals.

OpenAI is governed by a non-profit, and Altman reportedly holds no shares in its capped-profit subsidiary, but the dominance of ChatGPT led critics to question whether Altman was looking for a way to shut the door behind him and block startups. Similar skepticism was leveled at social media firms like Facebook owner Meta and the previous owners of Twitter when they openly asked for government regulation.


Patrick Hedger, executive director of the Taxpayers Protection Alliance, told FOX Business, "Some will say that because Altman doesn't have a stake in the company, that that's not his motive. And while I do think there are some noble intentions behind what he's trying to do, there are plenty of others that have a vested interest in both OpenAI as well as their own AI products that are looking for a regulatory moat."

Following Altman's testimony, Hedger tweeted, "another stupid occupational license just dropped."

"Regulation should take effect above a capability threshold," Altman later clarified. "AGI [artificial general intelligence] safety is really important, and frontier models should be regulated." He added, "Regulatory capture is bad, and we shouldn't mess with models below the threshold. Open source models and small startups are obviously important."

Hedger noted that an IBM executive also testified at the Senate hearing, asking for regulations around AI.

"They're the kind of company of size and scale that can comply with that type of regulation," Hedger said. "So even if Altman's motives are purely noble, they're going to be taken advantage of by a lot of companies that have the ability to comply with an AI licensing regime, and therefore stall the sort of progress that you would want to see in decentralized innovation."

Hedger says the potential harms of AI are being over-hyped, and he sees promise in its ability to streamline an array of existing technologies.

"There's a lot of folks trying to compare this to the early days of the internet, and Congress feels like they didn't get ahead of the internet," he said. "But that's why the internet has been a success story, because it wasn't smothered in the cradle by regulation, and I would hate to see that happen to AI as well."


Meanwhile, some experts question whether Congress – or anyone – could rein in advanced AI at this stage of its explosive growth, even with the necessary know-how.

Alex Harmsen is a tech entrepreneur who has worked in the AI software space for several years, from autonomous vehicle technology to robotics and, most recently, his ChatGPT-powered investing tool, PortfolioPilot.

Harmsen's AI-powered investment guide is one of the few with a verified ChatGPT plugin, and he has built many of his own AI models over the past 10 to 15 years. "Still," he says, "I feel like this is happening faster than I can adapt to." He says he can't imagine what the rapid development feels like to a layperson. 

"I think if it's not regulated, it will run away from us, and it will happen way faster than we can control as humanity," he explained. "I also think if it is regulated, it will also happen faster than we can control."


Harmsen told FOX Business he doesn't have much faith in governments building proper regulations around AI, or even licensing requirements, because it would take AI experts with deep knowledge years to build a framework (while he expects the tech to be "infinitely more powerful" within a matter of months), and enforcement would be "tremendously difficult."

"In reality, the problem isn't going to come from OpenAI or Google, the problems are going to come from the massive amounts of open source models that you can get access to that can be run anywhere – that can be run in any cloud, that can be run on any server, that can be used for everything from political targeting, to spam, to fraud, to disinformation," Harmsen said. "I don't think we can put that genie back in the bottle."
