Anthropic AI safety researcher quits, says the ‘world is in peril’


An artificial intelligence researcher left his job at the U.S. firm Anthropic this week with a cryptic warning about the state of the world, marking the latest resignation in a wave of departures over safety risks and ethical dilemmas.

In a letter posted on X, Mrinank Sharma wrote that he had achieved all he had hoped during his time at the AI safety company and was proud of his efforts, but was leaving over fears that the “world is in peril,” not just from AI but from a “whole series of interconnected crises,” ranging from bioterrorism to concerns over the industry’s “sycophancy.”


He said he felt called to write, to pursue a degree in poetry and to devote himself to “the practice of courageous speech.”

“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” he continued.

Anthropic was founded in 2021 by a breakaway group of former OpenAI employees who pledged to take a more safety-centric approach to AI development than their competitors.


Sharma led the company’s AI safeguards research team.

Anthropic has released reports outlining the safety of its own products, including Claude, its hybrid-reasoning large language model, and markets itself as a company committed to building reliable and understandable AI systems.

The company faced criticism last year after agreeing to pay US$1.5 billion to settle a class-action lawsuit from a group of authors who alleged the company used pirated versions of their work to train its AI models.


Sharma’s resignation comes the same week OpenAI researcher Zoë Hitzig announced her resignation in an essay in the New York Times, citing concerns about the company’s advertising strategy, including placing ads in ChatGPT.

“I once believed I could help the people building A.I. get ahead of the problems it would create. This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I’d joined to help answer,” she wrote.

“People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”


Anthropic and OpenAI recently became embroiled in a public spat after Anthropic released a Super Bowl advertisement criticizing OpenAI’s decision to run ads on ChatGPT.

In 2024, OpenAI CEO Sam Altman said he was not a fan of using ads and would deploy them as a “last resort.”

Last week, he responded with a lengthy post criticizing Anthropic and disputing the commercial’s suggestion that embedding ads is deceptive.

“I guess it’s on brand for Anthropic doublespeak to use a deceptive ad to critique theoretical deceptive ads that aren’t real, but a Super Bowl ad is not where I would expect it,” he wrote, adding that ads will continue to enable free access, which he said creates “agency.”


Though they worked at competing companies, Hitzig and Sharma both expressed grave concern about the erosion of guiding principles meant to preserve the integrity of AI and protect its users from manipulation.

Hitzig wrote that a potential “erosion of OpenAI’s own principles to maximise engagement” might already be happening at the firm.

Sharma said he was concerned about AI’s capacity to “distort humanity.”
