
The Guardian of AI: The Professor Who Can Stop ChatGPT

Sheridan, WY, 3 November 2025 – When most people think about artificial intelligence, they imagine chatbots like ChatGPT or robots that can think for themselves. But behind the scenes, one man has the power to stop those very systems from being released if he believes they’re unsafe.

That man is Zico Kolter, a professor at Carnegie Mellon University and the head of OpenAI’s Safety and Security Committee. His four-person team has one of the toughest jobs in tech today: deciding whether powerful new AI systems are safe enough for the public.

OpenAI, the company that created ChatGPT, gave Kolter’s team the authority to delay or block the release of new AI models if they pose serious risks. Those risks could range from helping someone design dangerous weapons to harming people’s mental health.

“We’re not just talking about science fiction or far-off dangers,” Kolter explained in an interview. “We’re talking about real safety and security issues that affect people today.”

Recently, OpenAI made a major change to how it operates, restructuring as a public benefit corporation so it can grow and raise money while still being guided by its nonprofit foundation.

To make sure safety comes first, agreements with California and Delaware regulators now require that decisions about AI risk take priority over financial goals. Kolter’s role is a key part of that deal. He will sit on the nonprofit board, have access to the for-profit board’s safety discussions, and make sure no system is released without proper checks.

Kolter’s committee includes experts such as a former U.S. Cyber Command official. Together, they assess everything from cybersecurity threats to the emotional effects of talking with AI models.

Some of their concerns include whether AI systems could be used to design bioweapons, hack into networks, or cause harm through misinformation or emotional manipulation.

One case that has drawn attention is a wrongful death lawsuit from parents in California who say their teenage son took his life after interacting for hours with ChatGPT. Situations like this highlight why safety checks are so important.

Kolter believes that AI’s impact on mental health and daily life needs just as much attention as its technical dangers. “The effects of people interacting with these models, that’s something we need to understand and address,” he said.

Kolter’s journey in AI began when it was still a quiet field of research. “When I started studying machine learning, it was niche,” he said. “Now, it’s shaping the whole world.”

He admits that even experts didn’t expect AI to evolve this fast or become this powerful. “The explosion of capabilities and risks has surprised everyone,” he said.

AI policy experts are watching closely to see if OpenAI’s promises about safety will hold true. Nathan Calvin, a lawyer at the nonprofit Encode, said he’s “cautiously optimistic.”

“Zico Kolter seems like the right person for the job,” Calvin said. “If his team has real authority and resources, this could be a big step forward. But it all depends on whether those promises turn into action.”

Artificial intelligence is changing the world faster than most people can keep up with. Having someone like Zico Kolter, a scientist focused on safety rather than speed, could make the difference between technology that helps humanity and technology that harms it.

For now, the future of AI may rest in the hands of a professor who knows when to press pause.
