Microsoft AI Chief Says Pursuing Conscious AI Is Fundamentally Misguided

November 5, 2025 • 2 min read

Microsoft’s artificial intelligence chief Mustafa Suleyman issued a sharp warning to developers this week, stating that efforts to build conscious AI are fundamentally flawed and potentially dangerous. Speaking at the AfroTech conference in Houston, Suleyman told CNBC that researchers should abandon attempts to create seemingly conscious AI systems.

“I don’t think that’s the work people should be doing,” Suleyman said during his keynote. “If you’re asking the wrong question, you get the wrong answer. I think it’s completely the wrong question.”

Biological Consciousness Argument

Suleyman’s position rests on the philosophical theory of biological naturalism, originally proposed by philosopher John Searle. This concept asserts that consciousness is exclusively a biological phenomenon that cannot be replicated by computers. According to this view, while AI systems can mimic emotional responses, they lack the underlying biological mechanisms necessary for genuine conscious experience.

“Our physical sensation of pain is what makes us very sad and feel terrible, but AI doesn’t experience sadness when it experiences ‘pain,’” Suleyman explained to CNBC. “It just creates a perception, a seeming narrative of experience, of itself and of consciousness, but it’s not what it’s actually experiencing.”

The Microsoft executive called AI consciousness research “absurd,” categorically stating that AI systems “are not conscious and cannot be conscious.”

Industry Divide and Safety Concerns

Suleyman’s stance puts him at odds with other major AI companies pursuing artificial general intelligence. While competitors like Meta and Elon Musk’s xAI develop AI companions with increasingly human-like interactions, Microsoft is deliberately choosing a different path.

The company recently introduced new features for its Copilot assistant, including a “candid conversation” mode that challenges user assumptions while maintaining clear AI identity. “We’re building AIs that always work in service of people,” Suleyman emphasized, contrasting this approach with systems designed to simulate human consciousness.

Suleyman has repeatedly warned about “seemingly conscious AI” since publishing an influential blog post in August titled “We Should Build AI for People, Not to Be People.” He argues that AI systems appearing conscious could lead to what researchers call “AI psychosis,” where users develop unhealthy attachments to chatbots they believe genuinely experience emotions.

Recent tragic incidents, including cases where people harmed themselves after developing relationships with AI chatbots, underscore these concerns. California recently passed legislation requiring chatbots to disclose their AI nature and remind minors to take breaks during extended conversations.
