Daily news stories by Lucas

Extraordinary Senate hearing: ChatGPT chief calls for regulation of AI

While students and educators alike try to utilise, rather than exploit, artificial intelligence, alarm bells went off on Capitol Hill during an extraordinary Senate hearing on Tuesday.



Artificial intelligence could wreak havoc on the world as it advances.


That warning came from the head of the company behind the controversial AI chatbot ChatGPT, who insists the U.S. government must step in to minimise the risks of AI.


Sam Altman, the CEO of OpenAI, said, "My worst fears are that we, the field, the technology, the industry, cause significant harm to the world. I think if this technology goes wrong, it can go quite wrong and we want to be vocal about that. We want to work with the government to prevent that from happening."



(Sam Altman, the CEO of OpenAI, testifying before a panel of Senators on Tuesday)


ChatGPT made history last year with its ability to give human-like responses.


That ability was on display when the extraordinary Senate hearing began with opening remarks delivered in the AI-generated voice of Senator Richard Blumenthal.


"Too often, we have seen what happens when technology outpaces regulation."



With elections fast approaching, Altman sounded the alarm, warning that the misuse of AI could interfere with election integrity.


"The more general ability of these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation is a concern. And given that we're going to face an election next year and these models are getting better, I think this is a significant area of concern."



In an effort to safeguard the use of AI, Altman laid out his No. 1 goal.


"Number one, I would form a new agency that licenses any effort above a certain scale of capabilities, and can take that licence away and ensure compliance with safety standards."


Meanwhile, companies continue to feed vast amounts of data into AI systems and to invest billions of dollars in the technology.


One expert echoed Altman's sentiment, saying the entire scientific community must play a part in AI regulation.


Gary Marcus, an emeritus professor at New York University, said,

"AI is moving incredibly fast with lots of potential but also lots of risks."


"But we also need independent scientists, not just so that we scientists can have a voice, but so that we can participate directly in addressing the problems and evaluating solutions, and not just after products are released, but before," he added.
