Artificial Intelligence (AI) is no longer just a futuristic buzzword—it is a fast-evolving reality that has already begun shaping our lives. With platforms like ChatGPT, Claude, and Gemini revolutionising tasks from customer service to creative writing, the stakes are only getting higher. But what happens when the very creators of this technology start sounding the alarm?
In a recent episode of the This Past Weekend podcast with comedian and host Theo Von, Sam Altman, the CEO of OpenAI, made startling revelations about GPT-5—the upcoming iteration of the large language model that powers ChatGPT.
Summary Table: Sam Altman on GPT-5

| Key Details | Information |
|---|---|
| Subject | Sam Altman’s comments on GPT-5 |
| Context | Podcast interview with Theo Von |
| Major Concern | GPT-5’s rapid advancement and unpredictability |
| Notable Comparison | Manhattan Project |
| Quote on Oversight | “There are no adults in the room” |
| Implication | Need for strong AI governance and ethical controls |
| Indian Relevance | Highlights AI governance gaps, relevance for India’s fast-growing tech scene |
| Official Source | |
“I Felt Very Nervous”: Altman Speaks Out
Altman revealed that during some internal testing sessions of GPT-5, he felt “very nervous.” This wasn’t due to technical glitches or performance issues. Instead, it stemmed from how fast, powerful, and unpredictable the model has become.
His analogy sent ripples through the tech community: “I sort of felt like it was the Manhattan Project.” For those unfamiliar, the Manhattan Project was the top-secret World War II initiative that developed the world’s first nuclear weapons. Comparing an AI model to such a historical milestone isn’t just dramatic—it’s a sign of how consequential this technology could be.
Beyond Capabilities: A Warning About Regulation
Rather than highlighting GPT-5’s speed, improved reasoning, or human-like dialogue, Altman repeatedly emphasized the lack of global AI governance. His blunt statement—“There are no adults in the room”—exposes a crucial blind spot in today’s AI race.
As countries race to develop or regulate artificial intelligence, oversight remains fragmented. While some regions like the European Union are proposing frameworks such as the EU AI Act, major players like the United States and India still lack comprehensive legal guidelines.
This is especially significant for India, which is rapidly integrating AI across sectors like education, healthcare, agriculture, and public governance.
“It Feels Bad and Dangerous”: Altman on AI Dependency
Altman also expressed his discomfort with society’s increasing reliance on AI. He warned, “Something about collectively deciding we’re going to live our lives the way AI tells us feels bad and dangerous.”
Such a statement, coming from the man leading the charge on generative AI, suggests a deeper ethical dilemma. AI systems like GPT-4 and GPT-5 are already being used to make hiring decisions, diagnose diseases, and teach students. But who ensures these decisions are fair, transparent, and accurate?
Implications for India: Why Should We Care?
India is the world’s largest democracy and one of the fastest adopters of digital technology. With the government promoting AI-driven initiatives under “Digital India” and major corporations investing in AI-powered services, the country is poised to become a global AI hub.
However, the regulatory environment remains nascent. If GPT-5 and similar models grow unchecked, India may soon find itself dealing with:
- Algorithmic bias in governance
- Job displacement across IT, BPO, and education sectors
- Privacy and surveillance concerns
- Over-reliance on foreign-owned AI tools
Altman’s concerns act as a timely reminder for Indian policymakers and tech leaders: robust AI infrastructure must be accompanied by robust oversight.
A Glimpse Into GPT-5’s Capabilities
Although OpenAI has not released GPT-5 publicly as of August 2025, leaks and reports from early testers suggest it is significantly more:
- Context-aware: Retains and understands longer conversations
- Emotionally nuanced: Can infer user emotions better than previous models
- Multimodal: Understands and generates text, images, and possibly video
- Autonomous: Capable of executing tasks with minimal human instruction
These capabilities make GPT-5 a potential game-changer—but also a high-risk asset in the wrong hands.
What Altman Has Said Before About AI Risks
Altman has previously stated that artificial general intelligence (AGI)—a theoretical AI that can perform any intellectual task a human can—could go “quite wrong.” In testimony before the US Congress in 2023, he advocated for:
- AI licensing frameworks
- Red-team testing before model deployment
- International coordination to prevent misuse
But this time, his tone seemed more personal than corporate. He sounded like a scientist who’s peered into the abyss and come back with a warning.
Frequently Asked Questions (FAQs)
1. What is GPT-5?
GPT-5 is the upcoming generative AI model from OpenAI, expected to significantly outperform GPT-4 in speed, accuracy, and versatility.
2. Why did Sam Altman compare GPT-5 to the Manhattan Project?
Altman used the comparison to highlight GPT-5’s power and the lack of proper governance—similar to the dangerous potential of nuclear energy when first developed.
3. Has GPT-5 been released in India?
As of now, GPT-5 is under internal testing. There is no official launch timeline for public or enterprise-level usage in India.
4. What are the risks of using GPT-5?
Risks include misinformation, bias, over-dependence, job displacement, and lack of ethical safeguards.
5. How can India regulate AI effectively?
India should develop an AI-specific legal framework, encourage transparent public-private partnerships, and invest in AI ethics research at institutions like IITs and IIITs.
Conclusion
Sam Altman’s recent comments serve as a crucial wake-up call. GPT-5 is not just another tech upgrade; it represents a shift in how humanity interacts with machines—and potentially, how decisions get made across societies.
For India, the stakes are even higher. As a global IT powerhouse with growing AI ambitions, the country must not merely adopt these tools but shape the ecosystem in which they operate. This includes investing in AI ethics, indigenous models, and independent oversight bodies.
Because if the CEO of OpenAI is scared of what he’s created, it’s time for the rest of us to start paying attention—not out of fear, but out of responsibility.